Posts tagged: ux

New UX Theme

December 8, 2020 11:49 am

We’ve been working on a new, calmer UX Theme for a while.

The aim is to reduce the number of lines, have less visual clutter, better demarcation of boundaries, consistent colour use across all tools, and to make everything just a bit simpler and calmer to look at.

The best way to show you these changes is to show before and after images of each change.

Dialog Titles

Before: Sections are indicated by text with a horizontal line next to it.

After: Sections are indicated by bold text in Software Verify blue with no horizontal line.

Settings grids

Before: Vertical and horizontal grid lines are part of the display.

After: Vertical and horizontal grid lines are minimised with the boundaries between adjacent lines indicated by a subtle change of background colour.

Data grids

Before: Vertical and horizontal grid lines are part of the display.

After: Horizontal grid lines are minimised with the boundaries between adjacent lines indicated by a subtle change of background colour. Vertical grid lines are present, but very subtle so as not to intrude on the display.

Grid highlighting

Before: There was no automatic grid highlighting.

After: When you move the mouse over a grid the line under the mouse automatically highlights in light blue. This can be very useful on wide grids with many columns.

The grid highlighting functionality does not change which items are selected in the grid. It is purely a visual aid. The grid highlighting works for both data grids and settings grids.

Tab Headings

Before: Tabs were displayed with the current tab in bold.

After: Tabs are displayed with all tabs in Software Verify blue and the current tab is bold with the orange highlight colour.

Graphics – Circles

Before: Circles were displayed with outlines and pie section separators.

After: Circles are displayed without outlines and with no pie section separators.

Graphics – Bars

Before: Bars were displayed with outlines.

After: Bars are displayed without outlines.

Splitter Windows

Before: The splitter and the edges of the window were highlighted.

After: The splitter is highlighted. The edges are not highlighted.

Toolbar and menu icons

Before: Our previous icon set was 3D and looked a bit tired.

After: The new icon set is flat and uses the Software Verify colour palette.


These changes in isolation probably don’t look like much, but when you see them all together it makes for a more pleasant experience. The effect is magnified when you’re looking at a lot of data – with less clutter, the data itself stands out. It’s a subtle but important thing.

These changes will be rolled out across all our tools, both free and commercial, in the weeks following 8 December 2020.

Customers who have purchased tools and who have valid software maintenance will be emailed when software updates containing these changes are available for them to download.

We’ve been quiet for a while, sorry about that.

November 28, 2016 3:29 pm


It’s been a while since we posted anything on the blog. If you weren’t a customer regularly receiving our software update emails, you might think we weren’t doing anything.

That’s an oversight on our part. We’re hoping to rectify this over the next few months, posting more useful information both here and in the library.

_tempnam and friends

Our most recent update has been to update C++ Memory Validator to provide memory tracking for the _tempnam group of functions: _tempnam, _tempnam_dbg, _wtempnam and _wtempnam_dbg.

This support covers all supported compilers: Visual Studio 2015, 2013, 2012, 2010, 2008, 2005, 2003, 2002 and Visual Studio 6, as well as Delphi, C++ Builder, the Metrowerks compiler and the MinGW compiler.

.Net support, for the future

Internal versions of C++ Coverage Validator can provide code coverage statistics for .Net (C#, VB.Net, J#, F#, etc) as well as native languages (C++, C, Delphi, Fortran 95, etc).

Internal versions of C++ Performance Validator can provide performance profiling statistics for .Net (C#, VB.Net, J#, F#, etc) as well as native languages (C++, C, Delphi, Fortran 95, etc).

UX improvements

All tools, free and paid, have had the UX for filename and directory editing improved: if a filename doesn’t exist it is displayed in red, and if it does exist it is displayed in its normal colour (typically black). See screenshots (from Windows 8.1).

Non-existent filename:

Existing filename:
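Under the hood this is just an existence check driving the text colour. A minimal sketch – the helper name and RGB encoding below are ours, not from the tools:

```cpp
#include <cstdint>
#include <filesystem>
#include <string>

// Hypothetical helper (not from the tools): pick a text colour for a
// filename edit field. Red when the path doesn't exist, black otherwise.
std::uint32_t pathTextColour(const std::string &path)
{
    return std::filesystem::exists(path) ? 0x000000u : 0xFF0000u;
}
```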

Changes to injection behaviour

July 23, 2015 4:16 pm

We’ve just changed how we launch executables and attach to them at launch time. We’ve also changed how we inject into running executables. This blog post outlines the changes and the reasoning behind the changes.

The injected DLL

Microsoft’s documentation for DllMain() states that only certain functions can be called from DllMain() and that you need to be careful about what you call from it. The details and reasons for this have never been made very explicit, but the executive summary is that you risk deadlocking on the DLL loader lock (a lock over which you, as a programmer, have no control).

Despite this dire warning, from the first alpha versions of our tools in 1999 until July 2015 we started our profilers by making a call from DllMain() when it received the DLL_PROCESS_ATTACH notification. We never had any major problems with this, but that’s probably because we tried to keep things simple. There are some interesting side benefits of starting your profiler this way – the profiler stub just starts when you load the profiler DLL, so you don’t need to call a specific function to start it. This is also the downside: you can’t control whether the profiler stub starts profiling the application or not – it always starts.

Launching executables

Up until now the ability for the profiler to auto-start has been a useful attribute. But we needed to change this so that we could control when the profiler stub starts profiling the application into which it is injected. These changes involved the following:

  • Removing the call to start the profiler from DllMain().
  • Adding additional code to the launch profiler CreateProcess() injector to allow a named function to be looked up by GetProcAddress() then called.
  • Changing all calls to the launch process so that the correct named function is identified.
  • Finding all places that load the profiler DLL and modifying them so that they know to call the appropriate named function.

Injecting into running executables

The above mentioned changes also meant that we had to change the code that injects DLLs into running executables. We use the well documented technique of using CreateRemoteThread() to inject our DLLs into the target application. We now needed to add the ability to specify a DLL name, a function name, a LoadLibrary() function address and a GetProcAddress() function address, plus error handling, in the dynamically generated code that our tools can inject into a 32 bit or 64 bit application.

Performance change

A useful side effect of this change from DllMain() auto-start to the start function being called after the DLL has loaded is that thread creation happens differently.

When the profiler stub starts via a call from DllMain(), any threads created with CreateThread()/_beginthread()/_beginthreadex() wait until the DllMain() call that created them returns before they start running. You can create the threads and get valid thread handles, etc., but they don’t start working until DllMain() returns. This is part of Microsoft’s own don’t-cause-a-DLL-loader-lock-deadlock protection scheme. It means our threads that communicate data to the profiler GUI don’t run until the instrumentation process of the profiler is complete (because it all happens from a call inside DllMain()).

Now that we call the start profiler function after the LoadLibrary() call has returned, the threads start pretty much as soon as we create them. All data comms with the GUI get up to speed immediately and start sending data even as the instrumentation proceeds. The new arrangement gets data to the GUI faster, so the user of the software starts receiving actionable data more quickly than with the old arrangement.

UX changes

In doing this work we noticed some inconsistencies with some of our tools (Coverage Validator and Thread Validator, for instance) when working with an elevated Validator and a non-elevated target application if we were injecting into the running target application. The shared memory used by these tools to communicate important profiling data wasn’t accessible due to permissions conflicts between the two processes. This was a problem because we were insisting that the Validator should be elevated prior to performing an injection into any process.


A bit of experimentation showed that, under the new injection regime described above, we didn’t need to elevate the Validator to succeed at injecting into a non-elevated target application. It seems that you get the best results when the target application and the Validator run at the same elevation level. This is also important for working with services, as they tend to run with elevated permissions these days – but injecting into services is always problematic due to the different security regimes for services.

This insight allowed us to remove the previously mandatory "Restart with administrator privileges" dialog and move any potential request to elevate privileges into the inject wizard and inject dialog. In this article I will describe the inject dialog; the changes to the inject wizard are similar, with minor differences to accommodate the difference between a dialog and a wizard.


Depending upon the operating system and the version of the software, up to two additional columns may be displayed on the inject dialog. The display can be sorted by all columns.

Elevation status

When running on Windows Vista or any more recent operating system the inject dialog will display an additional column titled Admin. If a process is running with elevated permissions this column will display an administrator permissions badge, indicating that elevation may be required to inject into this process.

Processor architecture

When running 64 bit versions of our tools an additional column, titled Arch, is added to the inject dialog. This column indicates whether the process is a 32 bit process or a 64 bit process. We could have added a control to allow only 32 bit or only 64 bit processes to be displayed, but our thinking is that examining this column is something that is only done for confirmation when the user of our tools is working on both 32 bit and 64 bit versions of their software. As such, having to find a process selector and specify that you are interested in 32 bit processes is overhead the user probably doesn’t need.

UX Improvements for Coverage Validator

August 3, 2012 11:36 am

We recently released new versions of the Coverage Validator tools for all languages.

The main reason for this release was to make the tools more usable and make using them more satisfying. This work was inspired by some user experience research we commissioned with Think UI.

We’re so happy with these improvements we thought we’d share them so that you can learn from our improvements. We’re not finished with the Coverage Validator tools. This is just the start of changes to come.

I’m specifically going to talk about C++ Coverage Validator, but these improvements cut across all our Coverage Validator tools. Some of the improvements cut across all our development tools.

Summary Dashboard

The first thing a user of Coverage Validator sees is the summary dashboard.

The previous version of this dashboard was a grid with sparse use of graphics and lots of text. You had to read the text to understand what was happening with the code coverage for the test application. Additional comments and filter status information were displayed in right-hand columns.

Coverage Validator old dashboard

The new version of this dashboard is split into two areas. The top area contains a dial for each metric reported. Each dial displays three items of information: the number of items, the number of items visited, and how many items are 100% visited. This is done by means of an angular display for one value and a radial display for another. A couple of the dials are pie charts.

The bottom area of the dashboard displays information that is relevant to the recorded session. Any value that can be viewed or edited is easily reachable via a hyperlink.

Coverage Validator new dashboard

The result of these changes is that the top area makes it easy to glance at a coverage report and instantly know which session has better coverage than another. You don’t need to read the text to work it out. The bottom area draws attention to instrumentation failures (missing debug information, etc.) and shows which filters are enabled. By exposing this information in this way, more of Coverage Validator’s functionality is surfaced to the user of the software.

Coverage Dials

We developed a custom control to display each coverage dial.

A coverage dial displays the amount of data that has been visited, the amount that is unvisited, and the amount that has been completely visited. For metrics that do not have a partial/complete status the dial just displays as a two part pie chart. An additional version displays data as a three part pie chart; this last version is used for displaying Unit Test results (success, failure, error).

Coverage dial directories Coverage dial unit tests

The difference between unvisited coverage and visited coverage is displayed using an angular value. Items that have been completely visited (100% coverage) are displayed using a radial value emanating from the centre of the dial. Additional information is displayed by a graded colour change between the 100% coverage area and the circumference of the circle to indicate the level of coverage in partially covered areas.
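Turning the counts into drawing parameters is simple arithmetic. A sketch, using our own helper names and assuming a full 360° sweep and a linear radial mapping (the actual control may map values differently):

```cpp
// Angular sweep (degrees) for the visited portion of the dial.
double visitedSweepDegrees(int visited, int total)
{
    return total == 0 ? 0.0 : 360.0 * visited / total;
}

// Radius fraction (0..1) for the 100%-covered items, drawn radially
// from the centre of the dial.
double fullyVisitedRadiusFraction(int fullyVisited, int total)
{
    return total == 0 ? 0.0 : (double)fullyVisited / total;
}
```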

The coverage dial provides tooltips and hyperlinks for each section of the coverage dial.

Dashboard Status

The dashboard status area shows informational messages about the status of code instrumentation, a filter summary, unit test status and session merging status. Most items are either viewable or editable by clicking a hyperlink.

Dashboard status

To implement the hyperlink we created a custom control that supports email hyperlinks, web hyperlinks and C++ callbacks. This provides maximal flexibility. The hyperlinks are now used in many places in our tools – the About box, evaluation feedback box, error report boxes, data export confirmation boxes, etc.

Coverage Scrollbar

We’ve also made high level overview data available on all the main displays (Coverage, Functions, Branches, Unit Tests, Files and Lines) so that you can get an overview of the coverage of each file/function/branch/etc without the need to scroll the view.

We thought of drawing the coverage data onto the scrollbar itself. Unfortunately that would need an owner-drawn scrollbar, and Windows does not provide such a thing. An option was to use a custom scrollbar implementation, but doing that would mean having to cater for every different type of Windows scrollbar implementation. We didn’t think that was a good idea, so we’ve chosen to draw the coverage overview next to the scrollbar.

Coverage scroll bar

Editor Scrollbar

Similarly to the overview for each type of data we also provide a high level overview for the source code editor.

Editor code coverage

Directory Filter

Coverage Validator provides the ability to filter data on a variety of attributes. One of these is the directory in which a file is found. For example if the file was e:\om\c\svlWebAPI\webapi\ProductVersion\action.cpp the filter directory would be e:\om\c\svlWebAPI\webapi\ProductVersion\.

This is useful functionality, but Coverage Validator allows you to filter on any directory. In complex software applications it’s quite possible that you would want to filter on a parent directory or a root directory. That would give the following directories for the example above:

  • e:\om\c\svlWebAPI\webapi\
  • e:\om\c\svlWebAPI\
  • e:\om\c\
  • e:\om\
  • e:\
The solution to this problem is to create the context menu dynamically rather than use a preformed context menu stored in application resources. Additionally it is more likely that the current directory will be filtered rather than the parent, so it makes sense to reverse the order of the directories, going from leaf to root.
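Building the menu entries is then just a walk up the path from leaf to root. A sketch, using our own helper name and plain string handling so Windows-style paths parse the same on any platform:

```cpp
#include <string>
#include <vector>

// Given a full filename, return its directory and every ancestor
// directory, ordered leaf to root, each with a trailing separator.
std::vector<std::string> filterDirectories(const std::string &filename)
{
    std::vector<std::string> dirs;
    std::string::size_type pos = filename.find_last_of("\\/");
    while (pos != std::string::npos && pos > 0)
    {
        dirs.push_back(filename.substr(0, pos + 1));
        pos = filename.find_last_of("\\/", pos - 1);
    }
    return dirs;
}
```

Iterating the result in order gives exactly the leaf-to-root menu ordering described above.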

Coverage filter directory context menu

Instrumentation Preferences

The instrumentation preferences dialog is displayed to the user the first time Coverage Validator starts. The purpose of this dialog is to configure the initial way coverage data is collected. The options provide a range of trade-offs, from fast instrumentation with incomplete visit counts to slower instrumentation with complete visit counts; both choices affect the speed of execution of the software. Given that time to complete is an important cost, this is an option that should be chosen carefully.

Previous versions of the software displayed a wordy dialog containing two questions. Each question had two choices.

Coverage instrumentation preferences (old)

The new version of the instrumentation preference dialog has replaced the questions with a sliding scale. Two questions with two choices is effectively four combinations. The instrumentation level sliding scale has four values. As the slider value changes the text below the slider changes to provide a brief explanation of the instrumentation level chosen.
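Because two binary choices give exactly four combinations, each slider position maps directly onto a pair of choices. A sketch of one possible encoding – our own, and not necessarily the mapping the tool actually uses:

```cpp
struct InstrumentationLevel
{
    bool fastInstrumentation;   // first question: fast vs thorough
    bool completeVisitCounts;   // second question: complete vs approximate counts
};

// Decode a slider position (0..3) into the two underlying choices.
InstrumentationLevel decodeSlider(int position)
{
    InstrumentationLevel level;
    level.fastInstrumentation = (position & 2) == 0;  // positions 0,1 -> fast
    level.completeVisitCounts = (position & 1) != 0;  // odd positions -> complete
    return level;
}
```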

An additional benefit is that the previous version only implied the recommended values (we preset them). The new version also presets the recommended value, but in addition explicitly indicates the recommended instrumentation level.

This new design has fewer words, less visual clutter, is easier to use and places less cognitive load on the user.

Coverage instrumentation preferences (new)

Export Confirmation

Coverage Validator provides options to export data to HTML and XML. A common desire after exporting is to view the exported data. Previous versions of Coverage Validator overlooked this desire, no doubt causing frustration for some users. We’ve rectified this with a confirmation dialog displayed after exporting data. The options are to view the exported file or to view the contents of the folder holding the exported data. An option to never display the dialog again is also provided.

Coverage export confirmation

Debug Information

The previous version of the debug information dialog was displayed to the user at the end of instrumentation. After the user dismissed the dialog there was no way to view the data again. The dialog was simply a warning of which DLLs had no debug information. The purpose of this was to alert the user as to why a given DLL had no code coverage (debug information is required to provide code coverage).

The new version of the debug information dialog is available from the dashboard. The new dialog displays all DLLs and their status. Status information indicates whether debug information was found, and whether Coverage Validator is interested in that DLL (and, if not interested, why not). This allows you to easily determine if a DLL filter is causing a DLL to be ignored for code coverage.

Coverage Debug Information

When the dialog is displayed a Learn More… link is available. This presents a simple dialog providing some information about debug information for debug and release builds. We’ve used a modified static control on these dialogs to provide useful bold text (something you can’t do with plain MFC applications). It’s a small thing but it improves the structure of the dialog. This text was previously displayed as part of the debug information dialog; moving it to a separate dialog chunks the information, making it more accessible.

Coverage Debug Information Learn More

There is more to be done with this part of the software but this is an improvement compared to previous versions.

Tips Dialog

Coverage Validator has always had a “Tip of the day” dialog. This is something of a holdover from earlier forms of application development. We’d never really paid much attention to it, to how it functioned, how it behaved and what it communicated.

Tip of the day dialog

We’re planning to completely overhaul this dialog but that is a longer term activity. As such in this revision we’ve just made some smaller scale changes that still have quite an impact.

Tips dialog

The first change is that the previous “Tip of the day” dialog was displayed at application startup, whereas the new “Tips” dialog is not. The Tips dialog is now displayed when you launch an application and are waiting for instrumentation to complete. This means tips are displayed during “dead time” that you can’t otherwise use effectively – you’re waiting for the tool. The Tips dialog is still available from the Help menu, as the Tip of the day dialog was.

The second change is that the new Tips dialog is modeless. The previous Tip of the day dialog was modal. This means you can leave the dialog displayed and move it out of the way. You don’t have to dismiss it.

We’ve done away with the icon and replaced it with a tip number so you know which tip you are viewing. Tips are no longer viewed sequentially (Next Tip) but in a random order. At first this seems like a crazy thing to do, but when you try it, it actually increases your engagement: you wonder how many tips there are and which one you’ll get next. Hipmunk was an inspiration for this – they do something similar when calculating your plane flights (I hadn’t seen this when I used Hipmunk, but Roger from ThinkUI had).

There is more to be done with this part of the software but this is a useful improvement until our completely reworked Tips dialog is ready for release.


All of the changes have been made to improve and simplify the way information is communicated to the user of Coverage Validator. Improved graphics displays, interactive dashboards, better data dialogs, hyperlinks and occasional use of bold text all improve the user experience.

We’re not finished improving Coverage Validator. These are just our initial round of improvements.

User experience, alcohol and Buddhism

March 7, 2011 1:31 pm

Last Thursday I attended an event hosted by Red Gate Software on the topic “User Experience in Software Development”. I also met Roger Atrill at the event – we used to work together at (the now defunct) Laser Scan. After the event I went swimming, swam 1km (physiotherapy), and while getting changed chatted with another swimmer about my evening (two bottles of Grolsch provided by Red Gate, three and a half talks – I had to skip half the last one to get to the swimming pool) and the swimming. We were both surprised my swimming was still quite rapid despite the pint(ish) of Grolsch I had drunk. It was during this chat that I realised a connection between Buddhism and user experience testing.

The event consisted of four talks about user experience.

The first talk was about the challenges of creating a website for a council: balancing the conflicting desires of many council service providers against a useful user experience – finding your way to the right part of the website to find out if (for example) your local school is closed because of the snowfall overnight (no big deal where you live, perhaps, but snow is rare in the UK and a heavy snowfall causes chaos here because we are not equipped to deal with it).

Red Gate
The second talk was about how Red Gate managed the challenging task of going from their 1500 page, hard to navigate website to a smaller, 750 page, easier to use website. Some of the techniques they used: one room for all the work, complete transparency, Post-it notes for everything, and letting anyone (even outside the UX team) walk in and annotate prospective design changes, allowing people to comment on work when the UX team were not present. Don’t put colour in your mockups, because then people argue about incorrect branding rather than looking at the UX. Don’t create real webpages; keep it on paper or use mockup tools like Balsamiq – though even Balsamiq can be “too realistic”.

The third talk was about user testing from the perspective of someone whose users are mainly scientists, for whom computer use is only 10% of their daily work – the software needs to be easy to use and obvious. Simple things matter: naming an item “Literature” is not helpful – too obscure – so choose a more useful name. And don’t use two tabs; tabs only work well when you have more than two. Post-it notes (super sticky, not regular) for everything, colour coded by topic to make analysis easier afterwards. Don’t talk too much. Everyone knows that one, but it still needs to be said.

Remote Testing
The fourth talk was about remote testing. I didn’t see much of this talk as the first three talks had overrun and I had to leave at a fixed time. Fortunately for me, as I think this would have been the least useful of the talks for me (I had already used some of the tools they were going to talk about).

Being present
Buddha thinking about user testing

One of the key aspects of user experience testing is being able to observe without influencing the test. You can do this with isolated rooms and one way mirrors, but there is still influence at work – the fact that this room (and possibly this computer) is not the room the user normally uses. Or you can test at the user’s site, in their normal room, using their chair and desk. But you will need to be present, and there will not be a helpful one way mirror (unless you are testing inside a police interview room!). Your presence may influence the test. In fact, it probably will. You will be tempted to offer the user helpful hints when they get stuck. Or maybe they are thinking, and you ask them a question and break their train of thought – after all, you thought they’d gone quiet because they were stuck. It’s a bit like Schrödinger’s cat: you don’t know if it’s dead, but if you look, it will be dead. Hmmm.

But there is another part to user testing, separate from you, the tester, influencing the user. And that is being present in the moment. Actually watching the user, observing what they are really doing, not what you think they are doing. Not drifting off into some other train of thought, whether it be about why they clicked there five minutes ago, or what they’ll do on the next page (which you know really sucks and does need work), or about something unrelated like Star Wars or your girlfriend. You need to be present in the moment. Noticing what is happening, why it happened, and noting it down.

In that respect I think user testing and Buddhism have more in common than most folks realise. Buddhism is all about being present in the moment. Not off on some fleeting journey somewhere else. Not in the past, not in the future, not in some drunken haze because you got blitzed last night. Many folks think Buddhism and Islam ban the consumption of alcohol. They do not. But they do ban the consumption of such quantities that cause you to lose your focus on the moment.

Next time you find yourself drifting off in your user experience testing, think about changing your focus. Be present.

And if you find yourself too agitated to focus on user testing, you may want to consider some meditation classes (Buddhist or otherwise) to learn how to be calm and in the moment.
