Testing Graphics Protocols – Try Chinese Text – With the aid of some ropey pseudo-maths!
In HDX we use a variety of compression and text-recognition algorithms to ensure that bandwidth is used efficiently while screen elements such as text or CAD wireframe/hidden-line renderings remain clear and sharp. When testing graphics protocols, we try to ensure that we don't look only at the average or mass-market use case (for a product like XenDesktop, that often means test workloads such as Microsoft Office on Windows laptops as the endpoints). We also look at the most challenging use cases, e.g. very old, low-powered Linux thin clients as endpoints, because if we can achieve good performance and quality in those scenarios, the experience improves for all our users.
In the case of text quality, we often look at non-English fonts, because under the same conditions compression artefacts are often more noticeable to the user than they are with English text. Someone asked me a few weeks ago why this is. I've written before about some of the underlying principles of compression techniques and how they affect the appearance of text, describing the challenges text poses for H.264 codecs (read it here); I also wrote about the challenges text presents to the underlying Fourier-based compression techniques used in JPEG (read it here). This time, though, I tried to explain it without resorting to too much maths.
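To make the Fourier-compression point concrete, here is a minimal sketch (my own illustration, not anything from the HDX codebase) of why sharp glyph strokes suffer more than smooth photo-like content. It builds the 1-D orthonormal DCT-II matrix (the one-dimensional version of JPEG's 8×8 block transform), then discards all but the lowest-frequency coefficients as a crude stand-in for quantisation. A hard black-to-white edge, like the stroke of a dense Chinese character, leaves far more reconstruction error (ringing) than a gentle gradient:

```python
import numpy as np

def dct_ii(n):
    """Orthonormal DCT-II matrix: the 1-D analogue of JPEG's block transform."""
    j = np.arange(n)
    k = j[:, None]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)  # rescale the DC row so the matrix is orthonormal
    return M

n = 8
D = dct_ii(n)

smooth = np.linspace(0.0, 1.0, n)                  # gentle gradient: photo-like
edge = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # hard edge: a glyph stroke

def truncation_error(signal, keep=3):
    """Keep only the `keep` lowest-frequency coefficients, then reconstruct.

    Zeroing high-frequency coefficients is a crude stand-in for the
    quantisation step in a real codec.
    """
    coeffs = D @ signal
    coeffs[keep:] = 0.0
    return np.abs(D.T @ coeffs - signal).max()  # D is orthonormal: inverse = transpose

err_smooth = truncation_error(smooth)
err_edge = truncation_error(edge)
print(f"smooth gradient error: {err_smooth:.3f}")
print(f"sharp edge error:      {err_edge:.3f}")
```

The edge's error is substantially larger: a step needs many slowly decaying high-frequency terms to represent, so throwing those terms away produces the Gibbs-style ringing you see as fuzz around text, and the denser the strokes per block, the worse it gets.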
Read the entire article here: Testing Graphics Protocols – Try Chinese Text – With the aid of some ropey pseudo-maths!