Try switching off shaders in Preferences (found in the menu of the main VNL window), on the Graphics tab. You will need to restart VNL afterwards.
That menu item is grayed out and unchecked, so I assume shaders are already off.
As a general note, your graphics driver is too old or too limited to support high-performance graphics in VNL. If you want proper 3D graphics, the machine should use vendor drivers (from NVidia or ATI or similar) rather than Mesa. That said, VNL should still work, so we will look into why it fails; the shaders should have been turned off automatically.
In 99% of cases scientists use remote servers for calculations, don't have physical access to them, and don't need high-performance graphics. I don't care about graphics at all. Actually, I have only two options: 1) an SSH connection to an HPC cluster (or a dedicated server), or 2) VNC on request (VNC is more convenient). In the first case, I construct the system and run it from the command line through the batch system. In the second case, after constructing the system I can either run it in the VNC session (using one server with 16-80 CPUs) or submit it through the batch system (unlimited CPUs).
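To illustrate what I mean by the first case, here is a rough sketch of the submission workflow. This is only an example: it assumes a SLURM queue, the atkpython launcher, and MPI; the job name, core count, and file names are placeholders, not my actual setup.

import subprocess

# Hypothetical SLURM job script; resource numbers and file names are placeholders.
job_script = """#!/bin/bash
#SBATCH --job-name=nanodevice
#SBATCH --ntasks=16
#SBATCH --time=24:00:00

# Run the VNL/ATK Python input file in parallel via MPI.
mpiexec -n 16 atkpython nanodevice.py
"""

with open("submit_nanodevice.sh", "w") as f:
    f.write(job_script)

# Hand the job to the queue (on a PBS system this would be qsub instead).
subprocess.run(["sbatch", "submit_nanodevice.sh"], check=True)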
In my case, it is either a dedicated HP DL585 with 4x16 cores or an HP SL230s with 2x8 cores. I don't even know what kind of graphics they have.
Moreover, why are you running VNL over VNC? Why not run it on your own computer? That way the graphics will probably be a lot faster (a lot!!!).
I don't need nice and fast graphics. I use VNL because it is a convenient tool for constructing nanodevices. Running VNL locally on my laptop is not a solution, for many reasons.
Actually, I have more questions:
1. How many CPUs can VNL use effectively? When it was running on the DL585 with 64 cores, it used 1, 10, or 32 CPUs at different moments. Why not all 64?
2. When running on a cluster, I can specify as many CPUs as I want. But given what I noticed about CPU usage on a single server, it seems that running on a cluster would just waste computational time. So, can VNL run efficiently on a cluster?
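To be concrete about "specify as many CPUs as I want": this is roughly how I would scale up the request in the job script sketched above (still assuming SLURM and MPI; the numbers and names are placeholders). The question is whether VNL/ATK would actually make use of all those processes.

# Illustration only: scaling the earlier placeholder job script to more cores.
ntasks = 128  # placeholder: total MPI processes requested across cluster nodes

job_script = f"""#!/bin/bash
#SBATCH --job-name=nanodevice
#SBATCH --ntasks={ntasks}

# Launch one MPI process per requested core.
mpiexec -n {ntasks} atkpython nanodevice.py
"""

with open("submit_nanodevice_cluster.sh", "w") as f:
    f.write(job_script)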