The RP Photonics Software News
Using Monitors With High Screen Resolution
Posted on 2018-11-30 in the RP Photonics Software News (available as e-mail newsletter!)
Permanent link: https://www.rp-photonics.com/software_news_2018_11_30.html
Computer monitors with relatively high screen resolution are gradually becoming more popular, but unfortunately the display on such monitors is often subject to technical problems, which lead either to elements being displayed too small or to a blurred (unsharp) display. Here, I explain the essence of these problems, how they should be solved and how we handle them in our own software. The matter is somewhat technical, but I think it is useful to know at least roughly how these things work – whether you are using our software or any other.
The Old Way
I do not go back to the early times of analog displays; those are no longer relevant. A digital monitor has a certain number of pixels in the horizontal and vertical directions, each one having a fixed size which is determined by the design. (Precisely speaking, with “size” we usually mean the distance from the center of a pixel to the center of the next one, i.e., we include any space between the actually illuminated parts.) Usually, one will operate a computer monitor at its “native” resolution, which means that the computer maps certain RGB (red-green-blue) data exactly to each pixel of the monitor. One can in principle set a lower monitor resolution in the operating system (e.g., Windows) in order to obtain elements displayed in a larger size, but this is not recommended due to the reduced display quality.
A common type of monitor with FHD resolution (1920 × 1080 pixels, 16:9 format) could have a screen diagonal of 24 inches, which implies a pixel density of 92 dpi (dots per inch = pixels per inch) and a dot pitch (pixel spacing) of 0.277 mm. If you take a larger screen with 27 inch diagonal, for example, and the same FHD resolution, the pixels get somewhat larger (0.311 mm), and we have only 82 dpi. So the same items on the screen will get somewhat larger, but you may also use such a screen from a slightly larger distance, and effectively it will not make a big difference for you.
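The numbers above follow directly from the screen geometry: the pixel density is the pixel count along the diagonal divided by the diagonal length in inches, and the dot pitch is simply the inverse of that, converted to millimeters. A small Python sketch (the function name is my own) reproduces the values quoted above:

```python
import math

def screen_metrics(width_px, height_px, diagonal_in):
    """Return (pixel density in dpi, dot pitch in mm) for a monitor."""
    diagonal_px = math.hypot(width_px, height_px)  # pixel count along the diagonal
    dpi = diagonal_px / diagonal_in                # pixels per inch
    pitch_mm = 25.4 / dpi                          # 1 inch = 25.4 mm
    return dpi, pitch_mm

# The 24-inch FHD monitor from the text:
dpi, pitch = screen_metrics(1920, 1080, 24)
print(round(dpi), round(pitch, 3))   # → 92 0.277
```

The same function gives ≈82 dpi and 0.311 mm for the 27-inch FHD screen, and ≈245 dpi for the notebook display mentioned below.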
For a long time, the DPI values of different digital monitors were quite similar. Therefore, software developers could specify the size of a certain element (e.g., the width of a dialog window) as some number of pixels, which would then lead to an effective size on the screen that is appropriate on different devices. This normally works even on notebooks, which typically have much smaller screens than we use on the desktop while having a similar number of pixels; notebooks are simply viewed from a smaller distance.
However, we get into trouble when someone uses a screen with substantially higher DPI resolution. For example, I have a notebook where the display has 3200 × 1800 pixels and a screen diagonal of only about 15 inches, which leads to 245 dpi. When using a “normal” size of a dialog window in pixels, this will be far too small, and the program will hardly be usable – only with magnifying glasses!
How Does Windows Handle the Issue?
Nowadays, operating systems take care of the issues with high-DPI displays. They can do this because the communication with digital displays is bidirectional: the computer can learn from the monitor not only how many pixels it supports, but also what its physical display size in inches is. The following discussion applies to Windows, with which I am most familiar, but other modern operating systems behave similarly.
First of all, Windows allows one to set a certain magnification (e.g. 150% or 200%) in the display settings, and it will start with a default setting which may be well above 100% when a high-dpi screen is recognized.
Windows itself then uses a correspondingly larger number of pixels for any elements it displays itself – for example, in Windows Explorer. As a result, such things get the appropriate size and at the same time look very sharp due to the high resolution.
When a Windows application is running, what happens depends on the application. Windows first tries to determine whether the application is “DPI-aware”; an application can communicate that to the operating system with a so-called manifest, and if that is missing, Windows will assume that it is not DPI-aware. In such cases, Windows will try to save the day by fooling the application: it simulates a screen with smaller resolution and then automatically scales up the display of all visible elements of the application. That way, all elements will be displayed with an appropriate size – but unfortunately not with good quality, since the mentioned scaling process is not perfect. You will often notice that characters or line graphics, for example, appear somewhat blurred.
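Why that scaling blurs the display can be illustrated with a toy example (not Windows' actual scaler, just the principle): when a rendered bitmap is enlarged by a non-integer factor, the scaler must interpolate between source pixels, so a sharp one-pixel line acquires gray neighbors. A minimal one-dimensional sketch with linear interpolation:

```python
def upscale_row(row, factor):
    """Linearly interpolate a row of gray values (0..255) to a larger size."""
    n_out = round(len(row) * factor)
    out = []
    for i in range(n_out):
        src = i / factor                       # position in the source row
        left = int(src)
        right = min(left + 1, len(row) - 1)
        frac = src - left
        out.append(round((1 - frac) * row[left] + frac * row[right]))
    return out

# A sharp one-pixel white line on black, scaled up by 150%:
print(upscale_row([0, 0, 255, 0, 0], 1.5))   # → [0, 0, 85, 255, 85, 0, 0, 0]
```

The crisp value of 255 survives, but it is now flanked by intermediate gray values of 85 – exactly the kind of softening one sees around text and line graphics of non-DPI-aware applications.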
You can actually suppress that scaling process: on Windows 10, right-click on the executable file (e.g., an .exe file), go to “Compatibility”, then “Change high DPI settings”. However, this helps only if you really want the display to be correspondingly small. What you probably really want is that the application uses more pixels – but that you (or Windows) cannot achieve, since it would require changes in the code of the application.
So ideally a Windows application would do the following:
- Ask Windows about the screen resolution and accordingly scale up the number of pixels used for various things.
- Declare “DPI-awareness” in the manifest in order to prevent Windows from unnecessarily scaling up the display.
Modern software should behave like that, and then it works fine on screens with any resolution.
Fortunately, modern development platforms make it relatively easy for programmers to make code DPI-aware. In Delphi, for example, which I use for our software, one can continue using “virtual” resolution but get it scaled up automatically when the program starts. For example, one would make a dialog window 500 px wide (simply setting that size in the graphical form designer), but if the application is then executed on a computer with a high-DPI screen, that will automatically be increased e.g. to 750 or 1000 px before the code made by the programmer is executed. However, one can then no longer use fixed pixel sizes in the code itself. It is not hard to do the proper scaling; it is only that one may overlook certain parts of old code when revising it for DPI-awareness. Therefore, it happens that certain isolated details in a user interface appear too small on high-resolution screens until the developer notices and corrects that.
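The automatic scaling such platforms perform amounts to simple arithmetic: design-time (“virtual”) pixel sizes, defined at the traditional Windows baseline of 96 dpi (= 100% scaling), are multiplied by the ratio of the actual screen DPI to that baseline. A sketch of that logic in Python (names are my own, not the Delphi API):

```python
DESIGN_DPI = 96   # Windows' traditional baseline, corresponding to 100% scaling

def scale_px(design_px, screen_dpi):
    """Convert a design-time pixel size to the actual size used on a given screen."""
    return round(design_px * screen_dpi / DESIGN_DPI)

# The 500 px wide dialog window from the text:
print(scale_px(500, 96))    # → 500  (100% scaling)
print(scale_px(500, 144))   # → 750  (150% scaling)
print(scale_px(500, 192))   # → 1000 (200% scaling)
```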
Further Issues When Using Multiple Monitors
There are further technical issues in situations where you use multiple monitors with substantially different screen resolutions. Advanced software exhibits a higher level of DPI-awareness, called “per monitor”. This is more challenging to implement. Ideally, a program should not only take into account different DPI values for different windows, if they appear on different monitors; it should also properly rescale an already created window when it is moved to another monitor! However, I don't want to go into further details here, as those would be relevant only for a minority of users.
How Does RP Photonics Software Treat the Issue?
Until recently, our software was not DPI-aware, and that resulted in a somewhat blurred display on high-resolution screens. Although that is usually not a severe problem, it is definitely not nice. Recently, the code was made DPI-aware, leading to a nicely sharp display on such screens.
One special detail deserves some attention. In some cases, users of our software enter certain sizes in units of pixels in script code. For example, one may create a graphical diagram with the following code:
diagram 1, size_px = (1000, 600):
x: 0, 10
y: -3, +3
f: sin(x), color = red, width = 3
Here, the size of the diagram window is given in units of pixels. This now really means screen pixels, not the larger “virtual” pixels as it was when Windows did the scaling. That is convenient if you really want a certain number of pixels – for example, if you want to export the generated graphics as a file for use in a printed publication or on a webpage.
A caveat, however, is that scripts which you originally used on a display with common resolution will now produce correspondingly smaller diagrams. If you do not want that, you can have the scaling in terms of virtual pixels simply by inserting the single character “v” into the definition of the diagram:
diagram 1, size_vpx = (1000, 600):
The software will then accordingly increase the used number of pixels over what you actually entered.
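Conceptually, size_vpx just applies the system's scaling factor (e.g., 1.5 for 150%) to the numbers you entered, while size_px takes them literally. A hedged Python sketch of that distinction (function name and parameters are my own, not our actual implementation):

```python
def diagram_size_in_screen_px(size, scale_factor, virtual=False):
    """Return the actual on-screen pixel size of a diagram window.

    size: (width, height) as entered in the script
    scale_factor: the system scaling, e.g. 1.5 for 150%
    virtual: True for size_vpx (scaled up), False for size_px (taken literally)
    """
    if virtual:
        return tuple(round(v * scale_factor) for v in size)
    return size

# The diagram from the example at 150% system scaling:
print(diagram_size_in_screen_px((1000, 600), 1.5, virtual=True))   # → (1500, 900)
print(diagram_size_in_screen_px((1000, 600), 1.5, virtual=False))  # → (1000, 600)
```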