Category: Coverage

Viewing source code that’s in the “wrong place”.

By , November 26, 2021 7:15 pm

You’ve been given a program to debug. You’ve got the EXE and DLLs, and you’ve got the PDB files, so you can tell what the filenames, line numbers and symbol names are.

So far so good.

You’ve also got access to the source code, but the source is not on your machine, it’s on the original machine that the EXE and DLLs were built on.

How do you view the source using the various Validator tools? The answer is the same for all the Validators, so I’ll just concentrate on showing you how to do this with Coverage Validator.

Build Machine: XPS13

First we need to create an application on the build machine and try working with it on the test machine.

I’m going to create exampleApp on the machine xps13 on drive D:\ in the dev directory. It’s an MFC application built with the standard MFC app project creation wizard.

Drive D on xps13 is shared. Full path to the application: \\xps13\d\dev\exampleApp\exampleApp


Test Machine

On the test machine I’m going to create a directory structure similar to the one on the build machine, but it will only contain the executable and PDB files. There will be no source code.


Coverage Validator : First Run

If we just run Coverage Validator without telling it where the source code is, the PDB file will be found and code coverage will be performed using the debug information, but it won’t be possible to view the source code because the source code paths point to d:\dev\exampleApp\exampleApp on the test machine. There is no source code on the test machine.

If you click on the filename on the left hand panel a Find source file dialog box is displayed because the source file can’t be found at the specified location, or in any location in the Coverage Validator settings.


Configuring Source Code Location

For the dialog box above there are three options, two of which are useful in this scenario:


  • Search Folder…. Use the Microsoft Folder dialog to navigate to the location of the source code on the build machine. For this example you need to choose the folder d:\dev\exampleApp\exampleApp on the networked machine xps13

  • File Locations…. Edit the Coverage Validator file location definitions. This will display a version of the File Locations tab which can be found on the Coverage Validator settings dialog.

The file location definitions can also be edited from the Coverage Validator settings dialog, which you can access via the Settings menu.


The first thing to note is that the default display for File Locations is the location of PDB files. We need to add the path \\xps13\d\dev\exampleApp\exampleApp to the Source Files section.


  • Change the Path Type combo to Source Files.

  • Click Add… to add a path to the list of paths.

  • Type the path \\xps13\d\dev\exampleApp\exampleApp into the edit box.

  • Click OK to accept the new settings.


Viewing Source Code

Now that we have the source code location correctly configured, clicking on the filename in the left hand panel will show the source code in the right hand panel.

It’s possible that for your application you have more than one path to your source code on the build machine. If that’s the case just add as many paths as you need to the Source Files section of the File Locations settings.


Conclusion

You’ve learned how to configure alternate locations to search for source code, which is useful when the source code is no longer in the location it was in when the application was built.

Code coverage with NUnit and Coverage Validator

By , October 29, 2021 12:52 pm

In this blog post I’m going to give you an example for running .Net unit tests with NUnit and Coverage Validator. It’s the same process for .Net Core and C++.

I’m going to show how to do this with NUnit 2.7.1, but this method will work with any version of NUnit, 2.x or 3.x.

nunit-console.exe

We’re going to be testing with the console version of NUnit, nunit-console.exe. The program that runs the tests is nunit-console.exe, not a child process, so unlike working with VS Test, we don’t have to configure the application to monitor.

Video or step-by-step

I’ve created a video showing you how to configure Coverage Validator, but if you prefer step-by-step instructions, these are listed below the video in this blog post.

Coverage Validator

To get started we need to launch nunit-console.exe to run the tests.


Click the Rocket icon on the toolbar. This will display the Launch Application or Service dialog.


Choose Launch Native and .Net applications.

nunit-console.exe is a .Net application, so we’ll use the regular .Net and native launcher.

You can also launch using Launch->Applications->Launch Application…, or F4. These take you straight to the launch dialog/wizard, skipping the previous dialog.

The Start a Native/.Net application dialog is displayed.


Now we have to configure the start application dialog. We’ve got to:


  • choose the application to launch

  • set the unit test DLL to test

  • set the startup directory

  1. Set the application to launch.

    Next to the Application to Launch field click Browse… and select your nunit-console.exe

    Example: E:\om\c\3RD_SRC\nunit\NUnit 2.7.1\bin\nunit-console.exe


    Note that after editing the Application to Launch field the Application to Monitor field will auto-populate itself, choosing a default value for the application to monitor. For our purposes the default value should be identical to the application to launch.

  2. Arguments: Enter the full path to your DLL to test.

    In this example I’m going to test the Money.Tests.dll unit test DLL from the NUnit C# samples (nunit-csharp-samples), which you can download from GitHub.

    Example: E:\om\c\3RD_SRC\nunit\nunit-csharp-samples-master\nunit-csharp-samples-master\money\bin\Debug\Money.Tests.dll



  3. Startup Directory. Enter a path for the startup directory. A default will have been set based on the Application to Launch, but for unit test work you’ll need a writeable directory, so you’ll need to edit this value to something appropriate.



  4. If you’re using the Launch Dialog click Launch.


    If you’re using the Launch Wizard click Next until you get to the last page of the wizard then click Start Application.
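The same launch can also be scripted. Here's a hedged sketch using the -program, -directory and -arg command line options documented in the "Getting code coverage for a child process?" post below; the install path is shortened in the same way as in that post, and e:\test\output is a placeholder for a writeable startup directory.

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "E:\om\c\3RD_SRC\nunit\NUnit 2.7.1\bin\nunit-console.exe" 
-directory "e:\test\output" 
-arg "E:\om\c\3RD_SRC\nunit\nunit-csharp-samples-master\nunit-csharp-samples-master\money\bin\Debug\Money.Tests.dll"

Because nunit-console.exe runs the tests in-process, no application to monitor needs to be configured; the launched program is the monitored program.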





Results

For the test I’ve configured in this blog post, the results show code coverage for the unit tests.

There is no test framework code to filter out, no autogenerated code to filter out, just the results.

If you want to see how to filter results, take a look at the VS Tests code coverage article.



Any questions?

Hopefully this blog post has answered your questions about how to get code coverage with NUnit and Coverage Validator.

But you may have other questions. Please let us know at support@softwareverify.com and we’ll be pleased to help you.

Code coverage with VS Test and Coverage Validator

By , October 26, 2021 9:33 am

In this blog post I’m going to give you an example for running .Net Core unit tests with VS Test (formerly MS Test) and Coverage Validator. It’s the same process for regular .Net and C++.

First, let’s discuss the program we’re going to launch and the program we’re going to monitor.

vstest.console.exe

VS Tests are run by vstest.console.exe. So that’s the program we’re going to launch. On my machine the path is:

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe

In this example I’m showing VS 2019 Community edition, but it doesn’t matter which VS edition or version you use.

vstest.console.exe is not a .Net Core application (it’s a regular .Net application). You can check this with our free tool PE File Browser.

testhost.exe

vstest.console.exe executes the tests by running testhost.exe. We need to identify which testhost.exe to run (there will be several installed on your machine) and then configure Coverage Validator to monitor that testhost.exe when vstest.console.exe is run.

We haven’t worked out a way of identifying in advance which testhost.exe VS Test is going to use, but once you’ve found it, it will stay the same from then on.

On my machine testhost.exe is in c:\users\stephen\.nuget\packages\microsoft.testplatform.testhost\16.5.0\build\netcoreapp2.1\x64\testhost.exe

Note that despite the path testhost.exe itself is not a .Net Core application (it’s a regular .Net application).
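One way to list the candidate testhost.exe binaries on your machine is to search the NuGet package cache from a command prompt. The path below is from my machine; adjust the user name for yours.

dir /s /b c:\users\stephen\.nuget\packages\testhost.exe

Each result is a testhost.exe that could be configured as the application to monitor.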

Video or step-by-step

I’ve created a video showing you how to configure Coverage Validator, but if you prefer step-by-step instructions, these are listed below the video in this blog post.

Coverage Validator

To get started we need to launch vstest.console.exe to run the tests.


Click the Rocket icon on the toolbar. This will display the Launch Application or Service dialog.


Choose Launch Native and .Net applications.

Although we’re going to monitor code coverage in a .Net Core DLL, the application we’re going to launch to do this is not a .Net Core application, so we’ll use the regular .Net and native launcher.

You can also launch using Launch->Applications->Launch Application…, or F4. These take you straight to the launch dialog/wizard, skipping the previous dialog.

The Start a Native/.Net application dialog is displayed.


Now we have to configure the start application dialog. We’ve got to:


  • choose the application to launch

  • edit the applications to monitor

  • set the application to monitor

  • set the arguments for the unit test

  • set the startup directory

  1. Set the application to launch.

    Next to the Application to Launch field click Browse… and select your vstest.console.exe

    Example: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe


  2. Edit the applications to monitor. This is a multi-stage process. If the application to monitor has never been configured for the application being launched, you will need to configure the list of applications that can be monitored. If it has already been configured, you can edit the list to add or remove applications, or just use the existing selection.

    Note that after editing the Application to Launch field the Application to Monitor field will auto-populate itself, choosing a default value for the application to monitor. If this has never been configured before it will choose the same application as the application being launched (in this case vstest.console.exe). If this has been configured it will choose the default application for the configuration.

    For this blog post I’m going to assume that the application has never been configured, and show you how to configure it. Editing is just a subset of these actions and does not need its own blog post. If you already have applications to monitor configured you can skip this step.

    Next to Application to Monitor, click Edit…




    • The Applications to Monitor dialog is displayed.


    • We need to add an application to the list of applications to monitor.

      On the Applications to Monitor dialog click Add…



      • The Application to Monitor dialog is displayed.


        Note that the EXE and DLL are set to represent the current application you are launching. For Native and .Net applications the DLL field will be empty (only set for .Net Core applications).

        If you wish to edit these values to configure for a different application than the one you are launching, you can do so via the Edit… button.


      • On the Application to Monitor dialog click Add…




        • The Application and DLL dialog is displayed.

        • On the Application and DLL dialog click Browse… and select your testhost.exe

          Example: c:\users\stephen\.nuget\packages\microsoft.testplatform.testhost\16.5.0\build\netcoreapp2.1\x64\testhost.exe


        • click OK to accept the EXE and DLL combination.



      • You should now have two entries: one for testhost.exe and one for testhost.exe with a full path.


        You can repeat the Add… process for any additional applications you wish to configure.

        Optional: If you want to set this as the default application to monitor choose the appropriate entry in the Default application to monitor combo box.

      • click OK to accept the list of applications to monitor.

      The Applications to Monitor dialog should now show one entry, pairing vstest.console.exe with testhost.exe.



    • click OK to accept these definitions of applications to monitor.


  3. Set the application to monitor.

    In the Application to Monitor combo select the entry for testhost.exe.

    We intend to monitor the first testhost.exe that is launched, so set the Launch count to 1.


  4. Arguments: Enter the full path to your DLL to test.

    Example: E:\om\c\testApps\unitTests\HelloWorldCore\HelloWorldCoreNUnitDotNet5\bin\Debug\net5.0\HelloWorldCoreNUnitDotNet5.dll



  5. Startup Directory. Enter a path for the startup directory. A default will have been set based on the Application to Launch, but for unit test work you’ll need a writeable directory, so you’ll need to edit this value to something appropriate.



  6. If you’re using the Launch Dialog click Launch.


    If you’re using the Launch Wizard click Next until you get to the last page of the wizard then click Start Application.
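These GUI steps can also be scripted. Here's a hedged sketch using the -program, -directory, -arg and -programToMonitor command line options documented in the "Getting code coverage for a child process?" post below; the install path is shortened as in that post, e:\test\output is a placeholder for a writeable startup directory, and I'm assuming the default launch count (monitor the first matching launch) applies.

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" 
-directory "e:\test\output" 
-arg "E:\om\c\testApps\unitTests\HelloWorldCore\HelloWorldCoreNUnitDotNet5\bin\Debug\net5.0\HelloWorldCoreNUnitDotNet5.dll" 
-programToMonitor "c:\users\stephen\.nuget\packages\microsoft.testplatform.testhost\16.5.0\build\netcoreapp2.1\x64\testhost.exe"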





Results

For the test I’ve configured in this blog post, the results show code coverage for the unit tests, for the code tested by the unit tests, for some autogenerated code, and for some code in the test framework.

Some of the results are from code that isn’t your unit tests. You’ll need to filter these results by configuring Coverage Validator to ignore these code locations in future tests.



Filtering Autogenerated Code

To filter out the autogenerated code, right click on the entry and choose Instrumentation Filter->Exclude Filename.

Filtered out entries are shown in grey. Next time you run the instrumentation they won’t be included in the coverage results.


Filtering Test Framework Code

To filter out the test framework code, right click on the entry and choose Instrumentation Filter->Exclude DLL.

Filtered out entries are shown in grey. Next time you run the instrumentation they won’t be included in the coverage results.


Any questions?

Hopefully this blog post has answered your questions about how to get code coverage with VS Test and Coverage Validator.

But you may have other questions. Please let us know at support@softwareverify.com and we’ll be pleased to help you.

What’s new with Coverage Validator

By , December 10, 2020 6:26 pm

There are many changes coming to Coverage Validator.

I’m going to describe the various changes and the reasons behind them.

Name Change

The first one is the name change. C++ Coverage Validator becomes Coverage Validator.

Because Coverage Validator will be capable of handling multiple technologies and languages, having language specific designators prefixing Coverage Validator doesn’t make any sense.

New UX Theme

A new UX theme, which has less visual clutter and is calmer to look at, has been introduced. We’ve written about that in New UX Theme.


.Net Support

Coverage Validator now supports .Net languages: C#, VB.Net, C++.Net, etc.

.Net Coverage Validator is discontinued; all of its functionality has moved into Coverage Validator.

x64 and x86 support

C++ Coverage Validator shipped in two versions, a 32 bit version and a 64 bit version that could also process 32 bit executables.

Coverage Validator ships as a 64 bit version that can also process 32 bit executables, plus a 32 bit version. On a 32 bit machine only the 32 bit version installs; on a 64 bit machine both the 64 bit and 32 bit versions install, because occasionally there is a 32 bit native bug that you can only deal with from the 32 bit version of the tool.

The reasons for this change are

  • .Net applications can be built in 32 bit, 64 bit, and Any CPU versions. An Any CPU version launched on a 64 bit machine will run as 64 bit. To provide full .Net support we couldn’t support Any CPU on 64 bit from 32 bit Coverage Validator. The sensible option is to only support Coverage Validator in a form that can support both 32 bit and 64 bit architectures.
  • 64 bit processors are the dominant processors in the market. We should support them by default.

New menu items

To support the new .Net functionality there are some additional launch options for working with ASP.Net applications (IIS and Web Development Server) as well as .Net applications and .Net services.


Launch ASP.Net application using IIS.


Launch ASP.Net application using Web Development Server.


New settings options

The settings dialog has two new panels to allow you to configure .Net Function Inlining and .Net Function Caching. The defaults are the values that an application would normally run with.

.Net Function Inlining


.Net Function Caching


New launch option

With the ability to process both native and .Net applications and mixed mode applications comes the desire to sometimes restrict code coverage to just native code, or just .Net code, or to allow any code (mixed mode) to be covered. To handle this we’ve added a simple combo dropdown on the various launch dialogs that allows you to choose how code coverage is handled at a very high level.


Availability

These changes to Coverage Validator will be available after 13 December 2020.

Monitoring a service with the NT Service API

By , February 11, 2020 5:21 pm

Debugging services is a pain. There is a lot that can go wrong and very little you can do to find out what went wrong. Perfect! Just what you need for an easy day at work. Services run in a restricted environment, these days you also need to be Administrator to do anything with them, and getting your favourite software tool which isn’t a debugger working with them is hard. I remember years ago seeing the list of things you needed to do to get NuMega’s BoundsChecker to work with services. It was a couple of web pages of instructions, each line containing a detailed step. You had to do all of the actions correctly in order to set things up to work with services.

These days Microsoft have changed the security landscape and it’s no longer possible to launch your data monitoring software tool from a service, as that ability is correctly regarded as a security vulnerability. It’s also pretty much impossible to inject into a service from a GUI application. As a result the correct way to work with services is to add a few lines of glue code, in the form of calls to an API that sets up communications with an already running user interface.

We’ve described our updated NT Service API in a previous article, so in this article I’m going to talk about using the API to track errors in the service code calling the API, and also describe how you use the user interface to work with services. This article will focus on C++ Memory Validator, but the techniques described here will also work for C++ Coverage Validator, C++ Performance Validator and C++ Thread Validator. If you’re using a .Net service, or a mixed mode service with a .Net entry point, you don’t need to use the API, but the GUI parts of this article will still apply to you. If you’re using a native service or a mixed mode service with a native entry point, all of this article applies to you.

Monitoring a Service

Before we get into the error codes and error handling in the GUI, let’s first take a tour of how things should work if everything goes to plan. This will provide some context for the errors I’m going to describe later. I’m going to assume you’ve built both the example service and the example service client, and that you’ve installed the service (serviceMV.exe -install in an Administrator mode command prompt). The service client passes a string to the service, which reverses it and passes it back to the client. The service also deliberately leaks some memory for testing purposes.

Here’s a video of the process.

From the Launch menu, choose Monitor a Service.


The Monitor a Service dialog is displayed.


Enter the full path to your service and click OK to start monitoring. The Validator will now set up some environment variables and some data in the registry that will be used by the service API. After a few seconds the Start your Service dialog appears.


Click OK, then start your service (you’ll need to do this from an Administrator command prompt).

serviceMV -start

The Validator attaches to the service and after a few moments various status information in the Validator title bar and the Validator status bar is updated.

You may see an informational dialog about debug information. You can dismiss this (it can be viewed later from the Validator Tools menu). To change how symbols are found you’ll need to look at the Symbol Server and File Locations parts of the Validator settings dialog.


Next a dialog is displayed informing you that Administrator Privileges may be required.


For some services you may find that the Validator gets better data, or sends data to the GUI faster if the Validator is run in Administrator mode. If that is the case you’ll need to restart the Validator with Administrator privileges (and also stop and restart the service, etc).

For this particular example service, we don’t need Administrator privileges so we’ll continue without them.

Now we can interact with the service from the service client by sending a string to the service. The service reverses it and sends it back.

serviceClient "Hello World"


Once we’re done working with the service we can stop it (you’ll need to do this from an Administrator command prompt).

serviceMV -stop

The Validator disconnects from the service and displays all the data it has collected from the service.


That’s how it looks when everything goes according to plan.

What happens when things go wrong? That’s what the next section is about.

Tracking errors in the service

The various API functions return a SVL_SERVICE_ERROR error code. We’ve extended this code so that you can detect when the user has forgotten to do something prior to starting the service, or you can detect if various other error conditions have occurred. Some of these error codes are internal error codes and should never be seen by a customer, but we’re documenting them here for completeness.


  • SVL_FAIL_PATHS_DO_NOT_MATCH. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_INCORRECT_PRODUCT_PREFIX. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_X86_VALIDATOR_FOUND_EXPECTED_X64_VALIDATOR. Looks like you’re monitoring a 64 bit service with a 32 bit Validator. You need to use a 64 bit Validator.

  • SVL_FAIL_X64_VALIDATOR_FOUND_EXPECTED_X86_VALIDATOR. Looks like you’re monitoring a 32 bit service with a 64 bit Validator with the svl*VStubService.lib library. You need to use a 64 bit Validator with the svl*VStubService6432.lib.

  • SVL_FAIL_DID_YOU_MONITOR_A_SERVICE_FROM_VALIDATOR. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_ENV_VAR_NOT_FOUND. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_VALIDATOR_ENV_VAR_NOT_FOUND. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_VALIDATOR_ID_NOT_SPECIFIED. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_VALIDATOR_ID_NOT_A_PROCESS. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

  • SVL_FAIL_VALIDATOR_NOT_FOUND. Internal error. Looks like you forgot to Monitor a Service from the Launch Menu before starting the service.

To aid in debugging we strongly recommend that you log all error codes (success or failure) from Software Verify API calls. This will allow you to track down errors rapidly, rather than through a series of trial and error coding changes, or a back and forth with support that has no information to work with. We added all of the above error codes after 3 customers reported similar, but different, problems with using the service API. All of their problems would have been solved if these error codes had been available.

Error codes can be logged with this call.

void writeToLogFile(const wchar_t     *fileName,
                    SVL_SERVICE_ERROR errCode);

Helpful messages can be logged with this call.

void writeToLogFile(const wchar_t *fileName,
                    const wchar_t *text);

Error codes can be turned into human readable messages with this call.

const wchar_t *getTextForErrorCode(SVL_SERVICE_ERROR errorCode);

And if you need to log Windows error codes, use this call.

void writeToLogFileLastError(const wchar_t *fileName,
                             DWORD         errCode);

See the help documentation for all the available API calls.
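To make the logging recommendation concrete, here's a minimal sketch of what this might look like in a service's startup code. The writeToLogFile, getTextForErrorCode and writeToLogFileLastError calls are the ones declared above; svlMVStub_StartMonitoring and SVL_OK are hypothetical placeholders for the real start-monitoring call and success code, which you should take from the help documentation.

// Sketch only: svlMVStub_StartMonitoring() and SVL_OK are hypothetical
// placeholders - consult the help documentation for the real names.
// Assumes <windows.h> and the Software Verify stub library header are included.

static const wchar_t *LOG_FILE = L"c:\\serviceLogs\\serviceMV.log"; // must be writeable by the service account

void startMonitoringWithLogging()
{
    writeToLogFile(LOG_FILE, L"Starting Software Verify monitoring");

    SVL_SERVICE_ERROR errCode;

    errCode = svlMVStub_StartMonitoring(); // hypothetical API call

    // log every result, success or failure, so that any problem report
    // to support comes with the information needed to diagnose it
    writeToLogFile(LOG_FILE, errCode);
    writeToLogFile(LOG_FILE, getTextForErrorCode(errCode));

    if (errCode != SVL_OK) // hypothetical success value
    {
        // also capture the most recent Windows error in case it's relevant
        writeToLogFileLastError(LOG_FILE, GetLastError());
    }
}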

Tracking errors in the GUI

There are a couple of mistakes that can be made in the user interface. These are related to monitoring the wrong type of service, and the location of the service. Where it is possible to identify this error in the GUI, we will do so. Where it is not, the error codes described above will help you understand the mistake that has been made.

64 bit Service, 32 bit GUI

If you try to monitor a 64 bit service with a 32 bit GUI, that will fail. We can detect and prevent this. When this error happens you will be shown an error dialog similar to this.


Note that monitoring a 32 bit service with a 64 bit GUI is OK, but you need to use the svl*VStubService6432.lib not the svl*VStubService.lib. We can’t detect this from the GUI, which is why the SVL_FAIL_X64_VALIDATOR_FOUND_EXPECTED_X86_VALIDATOR error code exists – you will get this if you are linked to svl*VStubService.lib when you should be linked to svl*VStubService6432.lib.

Service on a network share

Windows won’t let you start a service on a network share. And yet I’ve lost count of the number of times I’ve tried to do this. This is typically because I have the solution working on machine X (where I wrote it) and wish to test on machine Y, and I just use a network share to map it across. This works for applications and fails for services. This can be a real time waster and Windows isn’t exactly helpful about this, and of course it’s in a service’s startup code, so fun debugging that.

To make this failure easier to detect, we check the path of the service you specify in the Monitor a Service dialog and determine if the service is on a network share. If it is, we tell you we can’t work with it. This then alerts you to the fact that you’ll need to copy the service locally to run tests on it. Probably an hour or two of your time saved, right there.


Conclusion

Working with services can be fraught with problems, but if you log your error codes you can easily and quickly identify any errors made configuring your use of the NT Service API that we were unable to catch with the Validator user interface.

Getting code coverage for a child process?

By , May 31, 2017 5:43 pm

In this blog post I’m going to explain how to collect code coverage for a process that is launched by another process. We’ll be using C++ Coverage Validator to collect the code coverage.

For example you may have a control process that launches helper programs to do specific jobs and you wish to collect code coverage data for one of the helper programs. I’m first going to show how you do this with the GUI, then I’ll show you how to do this with the command line.

For the purposes of this blog post I’m going to use a test program called testAppFromOtherProcess.exe as the child program and testAppOtherProcessCpp.exe as the parent process. Once I’ve explained this for C++, I’ll also provide examples for programs launched from Java and for programs launched from Python.

The test program

The test program is simple. It takes two numbers and calculates the sum of all the products of the loop indices. If fewer than two arguments are supplied, the values default to 10.

int _tmain(int argc, _TCHAR* argv[])
{
	int	nx, ny;
	int	x, y;
	int	v;

	nx = 10;
	ny = 10;
	v = 0;

	if (argc == 2)
	{
		nx = _tcstol(argv[1], NULL, 10);
	}
	else if (argc >= 3)
	{
		nx = _tcstol(argv[1], NULL, 10);
		ny = _tcstol(argv[2], NULL, 10);
	}

	for(y = 0; y < ny; y++)
	{
		for(x = 0; x < nx; x++)
		{
			v += (x + 1) * (y + 1);
		}
	}

	return v;
}
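As a quick sanity check on what this computes: the nested loops sum (x + 1) * (y + 1) over both ranges, which factorises into (1 + 2 + … + nx) * (1 + 2 + … + ny). With the defaults of nx = ny = 10 that is 55 * 55 = 3025, returned as the process exit code, which is the value the parent process reads back with GetExitCodeProcess.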

The parent C++ program

The parent C++ program is a simple MFC dialog that collects two values and launches the test program. The code for launching the child process looks like this:

void CtestAppOtherProcessCppDlg::OnBnClickedOk()
{
	// get data values

	CString	str1, str2;
	DWORD	v = 0;

	GetDlgItemText(IDC_EDIT_COUNT1, str1);
	GetDlgItemText(IDC_EDIT_COUNT2, str2);

	// create command line

	CString	commandline;

	commandline += _T("testAppFromOtherProcess.exe");
	commandline += _T(" ");
	commandline += str1;
	commandline += _T(" ");
	commandline += str2;

	// run child process

	STARTUPINFO         stStartInfo;
	PROCESS_INFORMATION stProcessInfo;

	memset(&stStartInfo, 0, sizeof(STARTUPINFO));
	memset(&stProcessInfo, 0, sizeof(PROCESS_INFORMATION));

	stStartInfo.cb = sizeof(STARTUPINFO);
	stStartInfo.dwFlags = STARTF_USESHOWWINDOW;
	stStartInfo.wShowWindow = SW_HIDE;

	int	bRet;

	bRet = CreateProcess(NULL,
			(TCHAR *)(const TCHAR *)commandline,
			NULL,
			NULL,
			FALSE,
			0,
			NULL,
			NULL,
			&stStartInfo,
			&stProcessInfo);
	if (bRet)
	{
		// wait until complete then get exit code

		WaitForSingleObject(stProcessInfo.hProcess, INFINITE);

		GetExitCodeProcess(stProcessInfo.hProcess, &v);

		// tidy up

		CloseHandle(stProcessInfo.hProcess);
		CloseHandle(stProcessInfo.hThread);
	}

	// display result

	SetDlgItemInt(IDC_STATIC_VALUE, v, FALSE);
}

Configuring the target C++ program

Before we can collect code coverage we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We do this from the launch dialog (or launch wizard). From the launch dialog, select the program to launch by clicking the Browse... button and selecting the file with the File dialog. Once a file has been chosen a default value will be selected for the Application to Monitor. This is the same program as you just selected with the File dialog.

Launch dialog showing the Application to Monitor

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

Applications To Monitor dialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose our launch program testAppOtherProcessCpp.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. Your dialog should look like the one shown below.

Application To Monitor dialog

Click OK to close the Application To Monitor dialog.

Click OK to close the Applications To Monitor dialog.

The Application to monitor combo will now have additional entries in it.


Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored.


Click Go! to start the parent process.

Coverage Validator with the parent process running

You will notice that C++ Coverage Validator is not collecting data. Now click on the Launch Child Process button. The child process is launched, C++ Coverage Validator recognises the parent process is launching a child process that is configured to be monitored and has the correct launch count (this is the first time it is being launched and the launch count is set to "1") - the child process is instrumented for code coverage. You can see the instrumentation progress in the title bar and pretty soon code coverage statistics are being displayed by C++ Coverage Validator.

Code coverage results

Command Line, example for C++

OK, that's wonderful, we can collect code coverage using the GUI to launch one program and collect data from a child process. All without any coding. Super. So how do we do that from the command line? Glad you asked!

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "e:\test\release\testAppOtherProcessCpp.exe"
-directory "e:\test\release" 
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe" 

How does this work?

  • -directory. Specify the startup directory.
  • -program. Specify the program to launch.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Very straightforward and simple. Paths must have quotes if they contain spaces. If in doubt always use quotes. Note also that where you've installed C++ Coverage Validator will be different, most likely in C:\Program Files (x86)\Software Verification. We shortened it for the example to make it fit the page.

Java

The parent program in Java is very simple. It takes any arguments passed to it and passes them to the target program.

import java.io.IOException;
import java.lang.ProcessBuilder;
import java.util.ArrayList;

public class testAppFromOtherProcessJava 
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        String target = "e:\\om\\c\\testApps\\testAppFromOtherProcess\\Release\\testAppFromOtherProcess.exe";
        ProcessBuilder p = new ProcessBuilder();

        // add the args to be passed to the target program; unlike C/C++, args[0] is not the program name

        ArrayList<String> targetArgs;

        targetArgs = new ArrayList<String>();
        targetArgs.add(target);
        for (int i = 0; i < args.length; i++)
        {
            targetArgs.add(args[i]);
        }

        p.command(targetArgs);

        // run the process, wait for it to complete and report the value calculated

        Process proc;

        proc = p.start();
        proc.waitFor();

        System.out.println("Result: " + proc.exitValue());
    }
}

You can compile this program with this simple command line. This assumes you have a Java Development Kit installed and javac.exe on your path.

javac testAppFromOtherProcessJava.java

Configuring the target Java program

As with the C++ target program we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We're running a Java program so the executable to launch is the Java runtime. Click the Browse... button and select the Java runtime you are using.

Launch dialog with the Java runtime selected

The launch directory is automatically configured to be the same as the launch program. In the case of a Java program, that is almost certainly incorrect. We're going to choose the directory where our Java class is located. Click the Dir... button and choose that directory.

Launch dialog with the startup directory set

We also need to tell the Java runtime what class to execute. This is provided as an argument to the program being run (the Java runtime). In the arguments field, type the name of the class. In this case testAppFromOtherProcessJava (without the .class extension).

Launch dialog with the Java class argument set

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

Applications To Monitor dialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose the Java runtime java.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. We now need to add the target program to the list of programs we want to monitor. Click Add... and select testAppFromOtherProcess.exe. Your dialog should look like the one shown below.

Application To Monitor dialog for the Java example

Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored. Click Go! to start the parent process.

Launch dialog showing the Application to Monitor

The Java process launches testAppFromOtherProcess.exe immediately. As such you will notice that C++ Coverage Validator starts collecting code coverage almost instantly because it has recognised the Java process is launching a child process that is configured to be monitored and has the correct launch count.

Code coverage results for the Java example

Command Line, example for Java

As you can see, it's slightly more complicated for Java than for C++, but only because the Java runtime is located in a different folder than the test executable and because we also have to specify a Java class to execute. We still managed to collect code coverage for a child process of a just in time compiled language without any coding.

Of course, you now want to know how to do this for the command line. Is this any more complicated than for the C++ example? No! Just as easy. Here's how you do it:

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "c:\program files\java\jdk1.8.0_121\bin\java.exe"
-directory "e:\test\release" 
-arg testAppFromOtherProcessJava
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe"

How does this work?

  • -arg. Specify an argument to the program to launch. In this example this specifies the Java class to execute.
  • -directory. Specify the startup directory.
  • -program. Specify the program to launch. In this example this specifies the Java runtime.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Use as many -arg options as you need. We only used one because that's all we need for the example.

Python

The parent program in Python is very simple.

import sys
import subprocess

# build the command line for the target program, forwarding any arguments
# passed to this script (sys.argv[0] is the script name, so it is skipped)
cmdLine = r"E:\om\c\testApps\testAppFromOtherProcess\Release\testAppFromOtherProcess.exe"
for arg in range(1, len(sys.argv)):
  cmdLine += " "
  cmdLine += sys.argv[arg]

# run the target program and wait for it to complete
subprocess.call(cmdLine, stdin=None, stdout=None, stderr=None, shell=False)

Configuring the target Python program

As with the C++ target program we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We're running a Python program so the executable to launch is the Python interpreter. Click the Browse... button and select the Python interpreter you are using.

Launch dialog with the Python interpreter selected

The launch directory is automatically configured to be the same as the launch program. In the case of a Python program, that is almost certainly incorrect. We're going to choose the directory where our Python script is located. Click the Dir... button and choose that directory.

Launch dialog with the startup directory set

We also need to tell Python what script to launch. This is provided as an argument to the program being run (the Python interpreter). In the arguments field, type the name of the script. In this case testAppFromOtherProcess.py.

Launch dialog with the Python script argument set

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

Applications To Monitor dialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose the Python interpreter python.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. We now need to add the target program to the list of programs we want to monitor. Click Add... and select testAppFromOtherProcess.exe. Your dialog should look like the one shown below.

Application To Monitor dialog for the Python example

Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored. Click Go! to start the parent process.

Launch dialog showing the Application to Monitor

The Python process launches testAppFromOtherProcess.exe immediately. As such you will notice that C++ Coverage Validator starts collecting code coverage almost instantly because it has recognised the Python process is launching a child process that is configured to be monitored and has the correct launch count.

Code coverage results for the Python example

Command Line, example for Python

As you can see, it's slightly more complicated for Python than for C++, but only because the Python interpreter is located in a different folder than the test executable and because we also have to specify a Python script. We still managed to collect code coverage for a child process of a scripted language without any coding.

Of course, you now want to know how to do this for the command line. Is this any more complicated than for the C++ example? No! Just as easy. Here's how you do it:

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "c:\python36-32\python.exe"
-directory "e:\test\release" 
-arg testAppFromOtherProcess.py
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe"

How does this work?

  • -arg. Specify an argument to the program to launch. In this example this specifies the Python script to run.
  • -directory. Specify the startup directory.
  • -program. Specify the program to launch. In this example this specifies the Python interpreter.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Use as many -arg options as you need. We only used one because that's all we need for the example.

Conclusion

We've demonstrated how to monitor code coverage in a target program launched from C++, Java and Python, using both the GUI and the command line. Each example is slightly different, showing you the changes required for each situation. If you have any questions please email support@softwareverify.com

You can download the C++, Java and Python code used in these examples here.

Speeding up merging with Coverage Validator

By , December 16, 2015 11:43 am

Coverage Validator has an option to automatically merge the coverage results of the current session with a central session. This allows you to get an automatic overview of all code coverage without having to merge the results yourself.

Some people use this, but others prefer to record individual sessions and merge them later. This is effective, but the merging stage can be slow: to merge two files you need to start Coverage Validator, load both sessions, merge them, then save the result. This is known as pairwise merging. Even with command line support, this is time consuming.

-mergeMultiple

To speed this up we’ve just added the -mergeMultiple command line option.

-mergeMultiple takes one argument, a filename. The file contains the list of session files to merge, one per line.

Example command line:
-mergeMultiple e:\cv_merge_multiple.txt -mergeSessions -saveMergeResult e:\cv_merge_result.cvm -hideUI

Example merge multiple file:
e:\cv_help.cvm
e:\cv_red.cvm
e:\cv_green.cvm
e:\cv_blue.cvm
e:\cv_magenta.cvm
e:\cv_cyan.cvm
e:\this_file_doesnt_exist.cvm

Files that don’t exist are not merged. They do not cause any error conditions. This is deliberate – to provide fault tolerance if an intended merge target doesn’t exist for some reason. The last thing you want is a failed merge.
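If your session files all live under one directory tree, the merge list itself can be generated rather than written by hand. A hedged example, assuming your sessions are stored under e:\sessions (a placeholder):

dir /s /b e:\sessions\*.cvm > e:\cv_merge_multiple.txt

This emits one full path per line, which is the format -mergeMultiple expects.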

Performance improvement

We’ve tested this with one of our customers that could benefit from merging multiple files in one go. The performance improvement for merging 84 files (resulting in a 3.66GB merged session file, 64 bit Coverage Validator) is a speed up of 8 times: pairwise merge time was 32 minutes, with -mergeMultiple the merge time is now 4 minutes.

64 bit C++ software tool Beta Tests are complete.

By , January 9, 2014 1:33 pm

We recently closed the beta tests for the 64 bit versions of C++ Coverage Validator, C++ Memory Validator, C++ Performance Validator and C++ Thread Validator.

We launched the software on 2nd January 2014. A soft launch, no fanfare, no publicity. We just wanted to make the software available and then contact all the beta testers so that we could honour our commitments made at the start of the beta test.

Those commitments were to provide a free single user licence to any beta tester that provided feedback, usage reports, bugs reports, etc about the software. This doesn’t include anyone that couldn’t install the software because they used the wrong licence key!

We’ve written a special app that we can use to identify all email from beta test participants, allowing us to evaluate that email against the beta test feedback criteria. It saved us a ton of time and drudge work, even though writing this extension to the licence manager software took a few days. It was interesting using the tool and seeing who provided feedback and how much.

We’ve just sent out the licence keys and download instructions to all those beta testers that were kind enough to take the time to provide feedback, bug reports etc. A few people went the extra mile, bombarding us with email containing huge bugs, trivial items and everything in between. For two of them we were on the verge of flying out to their offices when we found some useful software that allowed us to remotely debug their software. Special mentions go to:

Bengt Gunne (Mimer.com)
Ciro Ettorre (Mechworks.com)
Kevin Ernst (Bentley.com)

We’re very grateful for everyone taking part in the beta test. Thank you very much.

Why didn’t I get a free licence?

If you didn’t receive a free licence and you think you did provide feedback, please contact us. It’s always possible that a few people slipped through our process of identifying people.

Dang! I knew I should’ve provided feedback

If you didn’t provide us with any feedback, check your inbox. You’ll find a 50% off coupon for the tool that you tested.

Code coverage comparison

By , May 9, 2013 3:19 pm

Recently we’ve had a flurry of customers wanting the ability to compare the code coverage of their application.

This sounded like a good idea so we asked these customers why they wanted to be able to compare different code coverage runs. The answers were varied:


  • I want to be able to take a known good baseline and compare it to a run with a regression in it.

  • I’ve inherited a legacy application and we want to understand the code paths for each given test.

  • I’ve inherited a legacy application and we know nothing about it. We’re testing it with appropriate input data and want to see which code executes.

For these customers being able to compare their code coverage runs is a big deal. Being able to compare your code coverage visually, rather than just knowing that Session A is better than Session B, allows you to quickly and easily identify exactly the area to focus on. It was such a compelling idea we’ve implemented code coverage comparison for all versions of Coverage Validator. This results in changes to the Session Manager and some new user interfaces.

Session Manager Dialog

Session Manager Dialog

The Session Manager has an additional Compare… button which will display the Session Comparison dialog.

Session Comparison Dialog

Session Compare Dialog

The Session Comparison allows you to choose two sessions and then view the comparisons. Clicking the Compare… button will display the Code Coverage Comparison viewer.

Code Coverage Comparison Viewer

Session Comparison Viewer

The code coverage comparison viewer is split into two parts, separated by a splitter control. The top part lists each file that is in each session being compared. The bottom part displays the source code coverage for the baseline session and for the comparison session. You can choose to view all code coverage data for these files or to only view the files that are different between baseline and comparison sessions.

You can compare different executables if that makes sense – for people testing related unit tests this can be a valid thing to do.

The display automatically selects the first file that contains code coverage differences and displays the baseline and comparison files in the bottom window at the location of the first difference in the file. As with our other code coverage displays the source code is highlighted to indicate which lines are visited/not visited and annotated so that you can determine line numbers and visit counts.

UX Improvements for Coverage Validator

By , August 3, 2012 11:36 am

We recently released new versions of the Coverage Validator tools for all languages.

The main reason for this release was to make the tools more usable and make using them more satisfying. This work was inspired by some user experience research we commissioned with Think UI.

We’re so happy with these improvements we thought we’d share them so that you can learn from them. We’re not finished with the Coverage Validator tools. This is just the start of changes to come.

I’m specifically going to talk about C++ Coverage Validator, but these improvements cut across all our Coverage Validator tools. Some of the improvements cut across all our development tools.

Summary Dashboard

The first thing a user of Coverage Validator sees is the summary dashboard.

The previous version of this dashboard was a grid with sparse use of graphics and lots of text. You had to read the text to understand what was happening with the code coverage for the test application. Additional comments and filter status information was displayed in right hand columns.

Coverage Validator old dashboard

The new version of this dashboard is split into two areas. The top area contains a dial for each metric reported. Each dial displays three items of information: the number of items, the number of items visited, and how many items are 100% visited. This is done by means of an angular display for one value and a radial display for another. A couple of the dials are pie charts.

The bottom area of the dashboard displays information that is relevant to the recorded session. Any value that can be viewed or edited is easily reachable via a hyperlink.

Coverage Validator new dashboard

The result of these changes is that the top area makes it easy to glance at a coverage report and instantly know which session has better coverage than another. You don’t need to read the text to work it out. The bottom area draws attention to instrumentation failures (missing debug information, etc) and to which filters are enabled. By exposing this information in this way, more of Coverage Validator’s functionality is exposed to the user of the software.

Coverage Dials

We developed a custom control to display each coverage dial.

A coverage dial displays the amount of data that has been visited, the amount that is unvisited, and the amount that has been completely visited. For metrics that do not have a partial/complete status the dial just displays as a two part pie chart. An additional version displays data as a three part pie chart. This last version is used for displaying Unit Test results (success, failure, error).

Coverage dial (directories) and coverage dial (unit tests)

The difference between unvisited coverage and visited coverage is displayed using an angular value. Items that have been completely visited (100% coverage) are displayed using a radial value emanating from the centre of the dial. Additional information is displayed by a graded colour change between the 100% coverage area and the circumference of the circle to indicate the level of coverage in partially covered areas.

The coverage dial provides tooltips and hyperlinks for each section of the coverage dial.

Dashboard Status

The dashboard status area shows informational messages about the status of code instrumentation, a filter summary, unit test status and session merging status. Most items are either viewable or editable by clicking a hyperlink.

Dashboard status

To implement the hyperlinks we created a custom control supporting email hyperlinks, web hyperlinks and C++ callbacks. This provides maximal functionality. The hyperlinks are now used in many places in our tools: About box, evaluation feedback box, error report boxes, data export confirmation boxes, etc.

Coverage Scrollbar

We’ve also made high level overview data available on all the main displays (Coverage, Functions, Branches, Unit Tests, Files and Lines) so that you can get an overview of the coverage of each file/function/branch/etc without the need to scroll the view.

We thought of drawing the coverage data onto the scrollbar. Unfortunately that would require an owner-drawn scrollbar, and Windows does not provide such a thing. An option was to use a custom scroll bar implementation, but doing that would mean having to cater to every different type of Windows scrollbar implementation. We didn’t think that was a good idea. As such we’ve chosen to draw the coverage overview next to the scroll bar.

Coverage scroll bar

Editor Scrollbar

Similarly to the overview for each type of data we also provide a high level overview for the source code editor.

Editor code coverage

Directory Filter

Coverage Validator provides the ability to filter data on a variety of attributes. One of these is the directory in which a file is found. For example if the file was e:\om\c\svlWebAPI\webapi\ProductVersion\action.cpp the filter directory would be e:\om\c\svlWebAPI\webapi\ProductVersion\.

This is useful functionality but Coverage Validator allows you to filter on any directory. In complex software applications it’s quite possible that you would want to filter on a parent directory or a root directory. That would give the following directories for the example above.

e:
e:\om
e:\om\c
e:\om\c\svlWebAPI
e:\om\c\svlWebAPI\webapi
e:\om\c\svlWebAPI\webapi\ProductVersion

The solution to this problem is to create the context menu dynamically rather than use a preformed context menu stored in the application resources. Additionally, it is more likely that the current directory will be filtered than a parent, so it makes sense to reverse the order of the directories, going from leaf to root.

Coverage filter directory context menu
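As an illustration, here's a minimal sketch (not the actual Coverage Validator implementation) of how the leaf-to-root list of ancestor directories for the dynamic context menu could be built:

#include <string>
#include <vector>

std::vector<std::wstring> getAncestorDirectories(std::wstring path)
{
    std::vector<std::wstring> dirs;

    // strip any trailing separator, e.g. "e:\om\c\" -> "e:\om\c"
    if (!path.empty() && (path.back() == L'\\' || path.back() == L'/'))
        path.pop_back();

    // repeatedly record the current directory, then chop the last
    // path component: leaf first, root last
    while (!path.empty())
    {
        dirs.push_back(path);

        size_t pos = path.find_last_of(L"\\/");
        if (pos == std::wstring::npos)
            break;

        path.erase(pos);
    }

    return dirs;
}

For e:\om\c\svlWebAPI\webapi\ProductVersion this produces the six directories listed above, leaf first, matching the order the menu items are displayed in.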

Instrumentation Preferences

The instrumentation preferences dialog is displayed the first time Coverage Validator starts. The purpose of this dialog is to configure the initial way coverage data is collected. This provides a range of performance levels from fast to slow, and from incomplete visit counts to complete visit counts. Both options affect the speed of execution of the software. Given that time to complete is an important cost, this is an option that should be chosen carefully.

Previous versions of the software displayed a wordy dialog containing two questions. Each question had two choices.

Coverage instrumentation preferences (old)

The new version of the instrumentation preference dialog has replaced the questions with a sliding scale. Two questions with two choices is effectively four combinations. The instrumentation level sliding scale has four values. As the slider value changes the text below the slider changes to provide a brief explanation of the instrumentation level chosen.

An additional benefit is that the previous version only implied the recommended values (we preset them). The new version also presets the recommended value, but explicitly indicates the recommended instrumentation level as well.

This new design has fewer words and less visual clutter, is easier to use, and places less cognitive load on the user.

Coverage instrumentation preferences (new)

Export Confirmation

Coverage Validator provides options to export data to HTML and XML. A common desire after exporting is to view the exported data. Previous versions of Coverage Validator overlooked this desire, no doubt causing frustration for some users. We’ve rectified this with a confirmation dialog displayed after exporting data. The options are to view the exported file or to view the contents of the folder holding the exported data. An option to never display the dialog again is also provided.

Coverage export confirmation

Debug Information

The previous version of the debug information dialog was displayed to the user at the end of instrumentation. After the user dismissed the dialog there was no way to view the data again. The dialog was simply a warning of which DLLs had no debug information. The purpose of this was to alert the user as to why a given DLL had no code coverage (debug information is required to provide code coverage).

The new version of debug information dialog is available from the dashboard. The new dialog displays all DLLs and their status. Status information indicates if debug information was found or not found and if Coverage Validator is interested in that DLL, and if not interested, why it is not interested. This allows you to easily determine if a DLL filter is causing the DLL to be ignored for code coverage.

Coverage Debug Information

When the dialog is displayed a Learn More… link is available. This presents a simple dialog providing some information about debug information for debug and release builds. We’ve used a modified static control on these dialogs to provide useful bold text (something that you can’t do with plain MFC applications). It’s a small thing but it improves the structure of the dialog. This text was displayed as part of the previous debug information dialog. Moving it to a separate dialog chunks the information, making it more accessible.

Coverage Debug Information Learn More

There is more to be done with this part of the software but this is an improvement compared to previous versions.

Tips Dialog

Coverage Validator has always had a “Tip of the day” dialog. This is something of a holdover from earlier forms of application development. We’d never really paid much attention to it, to how it functioned, how it behaved and what it communicated.

Tip of the day dialog

We’re planning to completely overhaul this dialog but that is a longer term activity. As such in this revision we’ve just made some smaller scale changes that still have quite an impact.

Tips dialog

The first change is where the dialog appears: the previous Tip of the day dialog was displayed at application startup, but the new Tips dialog is not. Instead it is displayed when you launch an application and are waiting for instrumentation to complete. This means tips are shown during “dead time” that you can’t use effectively anyway, while you’re waiting for the tool. The tips dialog is still available from the Help menu, as the previous Tip of the day dialog was.

The second change is that the new Tips dialog is modeless. The previous Tip of the day dialog was modal. This means you can leave the dialog displayed and move it out of the way. You don’t have to dismiss it.

We’ve done away with the icon and replaced it with a tip number so you know which tip you are viewing. Tips are no longer viewed sequentially (Next Tip) but in a random order. At first this seems like a crazy thing to do. But when you try it, it actually increases your engagement. You’re wondering how many tips there are and which one you’ll get next. Hipmunk was an inspiration for this – they do something similar when calculating your plane flights (I hadn’t seen this when I used Hipmunk but Roger from ThinkUI had seen it).

There is more to be done with this part of the software but this is a useful improvement until our completely reworked Tips dialog is ready for release.

Conclusion

All of the changes have been made to improve and simplify the way information is communicated to the user of Coverage Validator. Improved graphics displays, interactive dashboards, better data dialogs, hyperlinks and occasional use of bold text all improve the user experience.

We’re not finished improving Coverage Validator. These are just our initial round of improvements.
