
Category: Coverage

Getting code coverage for a child process?

By , May 31, 2017 5:43 pm

In this blog post I’m going to explain how to collect code coverage for a process that is launched by another process. We’ll be using C++ Coverage Validator to collect the code coverage.

For example you may have a control process that launches helper programs to do specific jobs and you wish to collect code coverage data for one of the helper programs. I’m first going to show how you do this with the GUI, then I’ll show you how to do this with the command line.

For the purposes of this blog post I’m going to use a test program called testAppFromOtherProcess.exe as the child program and testAppOtherProcessCpp.exe as the parent process. Once I’ve explained this for C++, I’ll also provide examples for programs launched from Java and for programs launched from Python.

The test program

The test program is simple. It takes two numbers, nx and ny, and calculates the sum of the products (x + 1) * (y + 1) for every x in [0, nx) and y in [0, ny). If fewer than two arguments are supplied the missing values default to 10.

int _tmain(int argc, _TCHAR* argv[])
{
	int	nx, ny;
	int	x, y;
	int	v;

	nx = 10;
	ny = 10;
	v = 0;

	if (argc == 2)
	{
		nx = _tcstol(argv[1], NULL, 10);
	}
	else if (argc >= 3)
	{
		nx = _tcstol(argv[1], NULL, 10);
		ny = _tcstol(argv[2], NULL, 10);
	}

	for(y = 0; y < ny; y++)
	{
		for(x = 0; x < nx; x++)
		{
			v += (x + 1) * (y + 1);
		}
	}

	return v;
}
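The nested loop sums (x + 1) * (y + 1) over the whole grid, so the return value factors into a product of two triangular numbers. A quick Python sketch (not part of the test program, just a cross-check) predicts the exit code the parent process should report:

```python
# Cross-check of the exit code returned by the test program:
# v = sum of (x + 1) * (y + 1) for x in [0, nx) and y in [0, ny),
# which factors into triangular(nx) * triangular(ny).

def expected_exit_code(nx=10, ny=10):
    # direct mirror of the C++ nested loop
    return sum((x + 1) * (y + 1) for y in range(ny) for x in range(nx))

def triangular(n):
    return n * (n + 1) // 2

# With the default arguments (nx = ny = 10) the program returns 55 * 55 = 3025.
assert expected_exit_code() == triangular(10) * triangular(10) == 3025
```

So with the defaults, the parent process should display 3025 as the result.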

The parent C++ program

The parent C++ program is a simple MFC dialog that collects two values and launches the test program. The code for launching the child process looks like this:

void CtestAppOtherProcessCppDlg::OnBnClickedOk()
{
	// get data values

	CString	str1, str2;
	DWORD	v = 0;

	GetDlgItemText(IDC_EDIT_COUNT1, str1);
	GetDlgItemText(IDC_EDIT_COUNT2, str2);

	// create command line

	CString	commandline;

	commandline += _T("testAppFromOtherProcess.exe");
	commandline += _T(" ");
	commandline += str1;
	commandline += _T(" ");
	commandline += str2;

	// run child process

	STARTUPINFO         stStartInfo;
	PROCESS_INFORMATION stProcessInfo;

	memset(&stStartInfo, 0, sizeof(STARTUPINFO));
	memset(&stProcessInfo, 0, sizeof(PROCESS_INFORMATION));

	stStartInfo.cb = sizeof(STARTUPINFO);
	stStartInfo.dwFlags = STARTF_USESHOWWINDOW;
	stStartInfo.wShowWindow = SW_HIDE;

	int	bRet;

	bRet = CreateProcess(NULL,
			(TCHAR *)(const TCHAR *)commandline,
			NULL,
			NULL,
			FALSE,
			0,
			NULL,
			NULL,
			&stStartInfo,
			&stProcessInfo);
	if (bRet)
	{
		// wait until complete then get exit code

		WaitForSingleObject(stProcessInfo.hProcess, INFINITE);

		GetExitCodeProcess(stProcessInfo.hProcess, &v);

		// tidy up

		CloseHandle(stProcessInfo.hProcess);
		CloseHandle(stProcessInfo.hThread);
	}

	// display result

	SetDlgItemInt(IDC_STATIC_VALUE, v, FALSE);
}

Configuring the target C++ program

Before we can collect code coverage we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We do this from the launch dialog (or launch wizard). From the launch dialog, select the program to launch using the Browse... button, choosing the file with the File dialog. Once a file has been chosen a default value will be selected for the Application to Monitor. This is the same program as the one you just selected with the File dialog.

CVLaunchDialogApplicationToMonitor

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

CVApplicationsToMonitorDialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose our launch program testAppOtherProcessCpp.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. Your dialog should look like the one shown below.

CVApplicationToMonitorDialog

Click OK to close the Application To Monitor dialog.

Click OK to close the Applications To Monitor dialog.

The Application to monitor combo will now have additional entries in it. Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored. Click Go! to start the parent process.

CVApplicationToMonitorParentProcess

You will notice that C++ Coverage Validator is not collecting data. Now click the Launch Child Process button. The child process is launched. C++ Coverage Validator recognises that the parent process is launching a child process that is configured to be monitored and has the correct launch count (this is the first time it is being launched and the launch count is set to "1"), so the child process is instrumented for code coverage. You can see the instrumentation progress in the title bar, and soon code coverage statistics are displayed by C++ Coverage Validator.

CVCodeCoverageResults

Command Line, example for C++

OK, that's wonderful, we can collect code coverage using the GUI to launch one program and collect data from a child process. All without any coding. Super. So how do we do that from the command line? Glad you asked!

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "e:\test\release\testAppOtherProcessCpp.exe"
-directory "e:\test\release" 
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe" 

How does this work?

  • -directory. Specify the startup directory.
  • -program. Specify the program to launch.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Very straightforward and simple. Paths must be quoted if they contain spaces; if in doubt, always use quotes. Note also that the location where you've installed C++ Coverage Validator will be different, most likely C:\Program Files (x86)\Software Verification. We shortened it in the example to make it fit the page.

Java

The parent program in Java is very simple. It takes any arguments passed to it and passes them to the target program.

import java.io.IOException;
import java.util.ArrayList;

public class testAppFromOtherProcessJava
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        String target = "e:\\om\\c\\testApps\\testAppFromOtherProcess\\Release\\testAppFromOtherProcess.exe";
        ProcessBuilder p = new ProcessBuilder();

        // build the command: the target program followed by the args to pass
        // to it (unlike C/C++, args[0] is not the program name)

        ArrayList<String> targetArgs = new ArrayList<String>();

        targetArgs.add(target);
        for (int i = 0; i < args.length; i++)
        {
            targetArgs.add(args[i]);
        }

        p.command(targetArgs);

        // run the process, wait for it to complete and report the value calculated

        Process proc = p.start();

        proc.waitFor();

        System.out.println("Result: " + proc.exitValue());
    }
}

You can compile this program with this simple command line. This assumes you have a Java Development Kit installed and javac.exe on your path.

javac testAppFromOtherProcessJava.java

Configuring the target Java program

As with the C++ target program we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We're running a Java program so the executable to launch is the Java runtime. Click the Browse... button and select the Java runtime you are using.

CVLaunchDialogJava

The launch directory is automatically configured to be the same as the launch program. In the case of a Java program, that is almost certainly incorrect. We're going to choose the directory where our Java class is located. Click the Dir... button and choose that directory.

CVLaunchDialogJavaDirectory

We also need to tell the Java runtime what class to execute. This is provided as an argument to the program being run (the Java runtime). In the arguments field, type the name of the class, in this case testAppFromOtherProcessJava (without the .class extension).

CVLaunchDialogJavaArguments

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

CVApplicationsToMonitorDialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose the Java runtime java.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. We now need to add the target program to the list of programs we want to monitor. Click Add... and select testAppFromOtherProcess.exe. Your dialog should look like the one shown below.

CVApplicationToMonitorDialogJava

Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored. Click Go! to start the parent process.

CVLaunchDialogApplicationToMonitorJava

The Java process launches testAppFromOtherProcess.exe immediately. As such you will notice that C++ Coverage Validator starts collecting code coverage almost instantly because it has recognised the Java process is launching a child process that is configured to be monitored and has the correct launch count.

CVCodeCoverageResultsJava

Command Line, example for Java

As you can see, it's slightly more complicated for Java than for C++, but only because the Java runtime is located in a different folder than the test executable and because we also have to specify a Java class to execute. We still managed to collect code coverage for a child process of a just in time compiled language without any coding.

Of course, you now want to know how to do this for the command line. Is this any more complicated than for the C++ example? No! Just as easy. Here's how you do it:

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "c:\program files\java\jdk1.8.0_121\bin\java.exe"
-directory "e:\test\release" 
-arg testAppFromOtherProcessJava
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe"

How does this work?

  • -arg. Specify an argument to the program to launch. In this example this specifies the Java class to execute.
  • -directory. Specify the startup directory.
  • -program. Specify the program to launch. In this example this specifies the Java runtime.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Use as many -arg options as you need. We only used one because that's all we need for the example.

Python

The parent program in Python is very simple.

import sys
import subprocess

cmdLine = r"E:\om\c\testApps\testAppFromOtherProcess\Release\testAppFromOtherProcess.exe"
for arg in sys.argv[1:]:
  cmdLine += " "
  cmdLine += arg
  
subprocess.call(cmdLine, stdin=None, stdout=None, stderr=None, shell=False)
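The C++ and Java parents report the child's exit code; subprocess.call returns that exit code, so the Python parent can do the same. Here is a minimal sketch, using a stand-in child process (python -c "...") so it runs anywhere; the real script would keep the testAppFromOtherProcess.exe command line:

```python
import subprocess
import sys

# subprocess.call returns the child's exit code, so the Python parent
# can report the computed value just like the C++ and Java parents do.
# A stand-in child is used here so the sketch runs anywhere; the real
# script would pass the testAppFromOtherProcess.exe command line instead.
child = [sys.executable, "-c", "import sys; sys.exit(42)"]

result = subprocess.call(child)
print("Result:", result)  # prints "Result: 42"
```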

Configuring the target Python program

As with the C++ target program we need to tell C++ Coverage Validator about the target program and the program that is going to launch it. We're running a Python program so the executable to launch is the Python interpreter. Click the Browse... button and select the Python interpreter you are using.

CVLaunchDialogPython

The launch directory is automatically configured to be the same as the launch program. In the case of a Python program, that is almost certainly incorrect. We're going to choose the directory where our Python script is located. Click the Dir... button and choose that directory.

CVLaunchDialogPythonDirectory

We also need to tell Python what script to launch. This is provided as an argument to the program being run (the Python interpreter). In the arguments field, type the name of the script. In this case testAppFromOtherProcess.py.

CVLaunchDialogPythonArguments

To allow us to monitor other programs we need to edit the list of applications we can monitor. Click the Edit... button to the right of the Application to monitor combo box. The Applications To Monitor dialog is displayed.

CVApplicationsToMonitorDialog

We need to add our target program to the list of programs to monitor. Click Add.... The Application To Monitor dialog is displayed. Choose the Python interpreter python.exe using Browse.... C++ Coverage Validator will identify any other executables in the same folder and add these to the list of target programs you may want to monitor. You can remove any programs you don't want to monitor with the Remove and Remove All buttons. We now need to add the target program to the list of programs we want to monitor. Click Add... and select testAppFromOtherProcess.exe. Your dialog should look like the one shown below.

CVApplicationToMonitorDialogPython

Select testAppFromOtherProcess.exe in the Application to monitor combo. Leave the launch count set to 1. The first time testAppFromOtherProcess.exe is launched it will be monitored. Click Go! to start the parent process.

CVLaunchDialogPythonAppToMonitor

The Python process launches testAppFromOtherProcess.exe immediately. As such you will notice that C++ Coverage Validator starts collecting code coverage almost instantly because it has recognised the Python process is launching a child process that is configured to be monitored and has the correct launch count.

CVCodeCoverageResultsPython

Command Line, example for Python

As you can see, it's slightly more complicated for Python than for C++, but only because the Python interpreter is located in a different folder than the test executable and because we also have to specify a Python script. We still managed to collect code coverage for a child process of a scripted language without any coding.

Of course, you now want to know how to do this for the command line. Is this any more complicated than for the C++ example? No! Just as easy. Here's how you do it:

"c:\C++ Coverage Validator\coverageValidator.exe" 
-program "c:\python36-32\python.exe"
-directory "e:\test\release" 
-arg testAppFromOtherProcess.py
-programToMonitor "e:\test\release\testAppFromOtherProcess.exe"

How does this work?

  • -arg. Specify an argument to the program to launch. In this example this specifies the Python script to run.
  • -directory. Specify the startup directory.
  • -program. Specify the program to launch. In this example this specifies the Python interpreter.
  • -programToMonitor. Specify the program that will be monitored for code coverage.

Use as many -arg options as you need. We only used one because that's all we need for the example.

Conclusion

We've demonstrated how to monitor code coverage in a target program launched from C++, Java and Python, using both the GUI and the command line. Each example is slightly different, showing you the changes required for each situation. If you have any questions please email support@softwareverify.com

You can download the C++, Java and Python code used in these examples here.


Speeding up merging with Coverage Validator

By , December 16, 2015 11:43 am

Coverage Validator has an option to automatically merge the coverage results of the current session with a central session. This allows you to get an automatic overview of all code coverage without having to merge the results yourself.

Some people use this, but some people prefer to record individual sessions and then merge the sessions later. This is effective, but the merging stage can be slow: to merge two files you need to start Coverage Validator, load the two sessions, merge them, then save the result. This is known as pairwise merging. Even with the command line support for this, it is time consuming.

-mergeMultiple

To speed this up we’ve just added the -mergeMultiple command line option.

-mergeMultiple takes one argument, a filename. The file contains the list of session files to merge, one per line.

Example command line:
-mergeMultiple e:\cv_merge_multiple.txt -mergeSessions -saveMergeResult e:\cv_merge_result.cvm -hideUI

Example merge multiple file:
e:\cv_help.cvm
e:\cv_red.cvm
e:\cv_green.cvm
e:\cv_blue.cvm
e:\cv_magenta.cvm
e:\cv_cyan.cvm
e:\this_file_doesnt_exist.cvm

Files that don’t exist are not merged. They do not cause any error conditions. This is deliberate – to provide fault tolerance if an intended merge target doesn’t exist for some reason. The last thing you want is a failed merge.
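If your sessions all live in one directory, the merge list file can be generated rather than written by hand. A minimal Python sketch (the directory and file names are illustrative, not from the product):

```python
import glob
import os

# Build a -mergeMultiple list file from all .cvm session files in a
# directory, one file name per line. Names here are illustrative.
def write_merge_list(session_dir, list_path):
    sessions = sorted(glob.glob(os.path.join(session_dir, "*.cvm")))
    with open(list_path, "w") as f:
        for session in sessions:
            f.write(session + "\n")
    return sessions

# Example: write_merge_list(r"e:\sessions", r"e:\cv_merge_multiple.txt")
```

Because missing files are tolerated, a stale entry in the generated list will simply be skipped at merge time.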

Performance improvement

We’ve tested this with one of our customers that could benefit from merging multiple files in one go. The performance improvement for merging 84 files (resulting in a 3.66GB merged session file, 64 bit Coverage Validator) is a speed up of 8 times: the pairwise merge time was 32 minutes, while with -mergeMultiple the merge time is now 4 minutes.


64 bit C++ software tool Beta Tests are complete.

By , January 9, 2014 1:33 pm

We recently closed the beta tests for the 64 bit versions of C++ Coverage Validator, C++ Memory Validator, C++ Performance Validator and C++ Thread Validator.

We launched the software on 2nd January 2014. A soft launch, no fanfare, no publicity. We just wanted to make the software available and then contact all the beta testers so that we could honour our commitments made at the start of the beta test.

Those commitments were to provide a free single user licence to any beta tester that provided feedback, usage reports, bug reports, etc. about the software. This doesn’t include anyone who couldn’t install the software because they used the wrong licence key!

We’ve written a special app here that we can use to identify all email from beta test participants and allow us to evaluate that email for beta test feedback criteria. It’s saved us a ton of time and drudge work even though writing this extension to the licence manager software took a few days. It was interesting using the tool and seeing who provided feedback and how much.

We’ve just sent out the licence keys and download instructions to all those beta testers that were kind enough to take the time to provide feedback, bug reports etc. to us. A few people went the extra mile. These people bombarded us with email containing huge bugs, trivial items and everything in between. For two of them, we were on the verge of flying out to their offices when we found some useful software that allowed us to remotely debug their software. Special mentions go to:

Bengt Gunne (Mimer.com)
Ciro Ettorre (Mechworks.com)
Kevin Ernst (Bentley.com)

We’re very grateful for everyone taking part in the beta test. Thank you very much.

Why didn’t I get a free licence?

If you didn’t receive a free licence and you think you did provide feedback, please contact us. It’s always possible that a few people slipped through our process of identifying people.

Dang! I knew I should’ve provided feedback

If you didn’t provide us with any feedback, check your inbox. You’ll find a 50% off coupon for the tool that you tested.


Code coverage comparison

By , May 9, 2013 3:19 pm

Recently we’ve had a flurry of customers wanting the ability to compare the code coverage of their application.

This sounded like a good idea so we asked these customers why they wanted to be able to compare different code coverage runs. The answers were varied:


  • I want to be able to take a known good baseline and compare it to a run with a regression in it.

  • I’ve inherited a legacy application and we want to understand the code paths for each given test.

  • I’ve inherited a legacy application and we know nothing about it. We’re testing it with appropriate input data and want to see which code executes.

For these customers being able to compare their code coverage runs is a big deal. Being able to compare your code coverage visually, rather than just knowing that Session A is better than Session B, allows you to quickly and easily identify exactly the area to focus on. It was such a compelling idea that we’ve implemented code coverage comparison for all versions of Coverage Validator. This results in changes to the Session Manager and some new user interfaces.

Session Manager Dialog

Session Manager Dialog

The Session Manager has an additional Compare… button which will display the Session Comparison dialog.

Session Comparison Dialog

Session Compare Dialog

The Session Comparison allows you to choose two sessions and then view the comparisons. Clicking the Compare… button will display the Code Coverage Comparison viewer.

Code Coverage Comparison Viewer

Session Comparison Viewer

The code coverage comparison viewer is split into two parts, separated by a splitter control. The top part lists each file that is in each session being compared. The bottom part displays the source code coverage for the baseline session and for the comparison session. You can choose to view all code coverage data for these files or to only view the files that are different between baseline and comparison sessions.

You can compare different executables if that makes sense – for people testing related unit tests this can be a valid thing to do.

The display automatically selects the first file that contains code coverage differences and displays the baseline and comparison files in the bottom window at the location of the first difference in the file. As with our other code coverage displays the source code is highlighted to indicate which lines are visited/not visited and annotated so that you can determine line numbers and visit counts.


UX Improvements for Coverage Validator

By , August 3, 2012 11:36 am

We recently released new versions of the Coverage Validator tools for all languages.

The main reason for this release was to make the tools more usable and make using them more satisfying. This work was inspired by some user experience research we commissioned with Think UI.

We’re so happy with these improvements we thought we’d share them so that you can learn from our improvements. We’re not finished with the Coverage Validator tools. This is just the start of changes to come.

I’m specifically going to talk about C++ Coverage Validator, but these improvements cut across all our Coverage Validator tools. Some of the improvements cut across all our development tools.

Summary Dashboard

The first thing a user of Coverage Validator sees is the summary dashboard.

The previous version of this dashboard was a grid with sparse use of graphics and lots of text. You had to read the text to understand what was happening with the code coverage for the test application. Additional comments and filter status information was displayed in right hand columns.

Coverage Validator old dashboard

The new version of this dashboard is split into two areas. The top area contains a dial for each metric reported. Each dial displays three items of information: the number of items, the number of items visited, and how many items are 100% visited. This is done by means of an angular display for one value and a radial display for another. A couple of the dials are pie charts.

The bottom area of the dashboard displays information that is relevant to the recorded session. Any value that can be viewed or edited is easily reachable via a hyperlink.

Coverage Validator new dashboard

The result of these changes is that the top area makes it easy to glance at a coverage report and instantly know which session has better coverage than another. You don’t need to read the text to work it out. The bottom area draws attention to instrumentation failures (missing debug information, etc.) and shows which filters are enabled. By exposing this information in this way, more of the functionality of Coverage Validator is exposed to the user of the software.

Coverage Dials

We developed a custom control to display each coverage dial.

A coverage dial displays the amount of data that has been visited, the amount that is unvisited, and the amount that has been completely visited. For metrics that do not have a partial/complete status the dial just displays as a two part pie chart. An additional version displays data as a three part pie chart; this last version is used for displaying Unit Test results (success, failure, error).

Coverage dial directories Coverage dial unit tests

The difference between unvisited coverage and visited coverage is displayed using an angular value. Items that have been completely visited (100% coverage) are displayed using a radial value emanating from the centre of the dial. Additional information is displayed by a graded colour change between the 100% coverage area and the circumference of the circle to indicate the level of coverage in partially covered areas.

The coverage dial provides tooltips and hyperlinks for each section of the coverage dial.

Dashboard Status

The dashboard status area shows informational messages about the status of code instrumentation, a filter summary, unit test status and session merging status. Most items are either viewable or editable by clicking a hyperlink.

Dashboard status

To implement the hyperlink we created a custom control that supports email hyperlinks, web hyperlinks and C++ callbacks. This provides maximal functionality. The hyperlinks are now used in many places in our tools: the About box, the evaluation feedback box, error report boxes, data export confirmation boxes, etc.

Coverage Scrollbar

We’ve also made high level overview data available on all the main displays (Coverage, Functions, Branches, Unit Tests, Files and Lines) so that you can get an overview of the coverage of each file/function/branch/etc without the need to scroll the view.

We thought of drawing the coverage data onto the scrollbar. Unfortunately that would require an owner-drawn scrollbar, and Windows does not provide such a thing. An option was to use a custom scroll bar implementation, but doing that would mean having to cater for every different type of Windows scrollbar implementation. We didn’t think that was a good idea. As such we’ve chosen to draw the coverage overview next to the scroll bar.

Coverage scroll bar

Editor Scrollbar

Similarly to the overview for each type of data we also provide a high level overview for the source code editor.

Editor code coverage

Directory Filter

Coverage Validator provides the ability to filter data on a variety of attributes. One of these is the directory in which a file is found. For example if the file was e:\om\c\svlWebAPI\webapi\ProductVersion\action.cpp the filter directory would be e:\om\c\svlWebAPI\webapi\ProductVersion\.

This is useful functionality but Coverage Validator allows you to filter on any directory. In complex software applications it’s quite possible that you would want to filter on a parent directory or a root directory. That would give the following directories for the example above.

e:
e:\om
e:\om\c
e:\om\c\svlWebAPI
e:\om\c\svlWebAPI\webapi
e:\om\c\svlWebAPI\webapi\ProductVersion

The solution to this problem is to create the context menu dynamically rather than use a preformed context menu stored in application resources. Additionally it is more likely that the current directory will be filtered rather than the parent, so it makes sense to reverse the order of the directories, going from leaf to root.

Coverage filter directory context menu
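The list of candidate filter directories is just the chain of ancestors of the file's directory, ordered leaf to root. A small Python sketch of that enumeration (illustrative only; the product implements this in C++):

```python
# Enumerate candidate filter directories for a Windows path, leaf first,
# matching the order used in the dynamically created context menu.
def filter_directories(path):
    parts = path.rstrip("\\").split("\\")
    # drop the file name, then keep every ancestor directory down to the drive
    return ["\\".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

dirs = filter_directories(r"e:\om\c\svlWebAPI\webapi\ProductVersion\action.cpp")
# dirs[0] is the leaf directory e:\om\c\svlWebAPI\webapi\ProductVersion,
# dirs[-1] is the drive root e:
```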

Instrumentation Preferences

The instrumentation preferences dialog is displayed to the user the first time that Coverage Validator starts. The purpose of this dialog is to configure the initial way coverage data is collected. This provides a range of performance levels from fast to slow, and from incomplete visit counts to complete visit counts. Both options affect the speed of execution of the software. Given that time to complete is an important cost, this is an option that should be chosen carefully.

Previous versions of the software displayed a wordy dialog containing two questions. Each question had two choices.

Coverage instrumentation preferences (old)

The new version of the instrumentation preference dialog has replaced the questions with a sliding scale. Two questions with two choices is effectively four combinations. The instrumentation level sliding scale has four values. As the slider value changes the text below the slider changes to provide a brief explanation of the instrumentation level chosen.

An additional benefit is that the previous version only implied the recommended values (we preset them). The new version still presets the recommended values, but also explicitly indicates the recommended instrumentation level.

This new design has fewer words, less visual clutter, is easier to use and places less cognitive load on the user.

Coverage instrumentation preferences (new)

Export Confirmation

Coverage Validator provides options to export data to HTML and XML. A common desire after exporting is to view the exported data. Previous versions of Coverage Validator overlooked this desire, no doubt causing frustration for some users. We’ve rectified this with a confirmation dialog displayed after exporting data. The options are to view the exported file or to view the contents of the folder holding the exported data. An option to never display the dialog again is also provided.

Coverage export confirmation

Debug Information

The previous version of the debug information dialog was displayed to the user at the end of instrumentation. After the user dismissed the dialog there was no way to view the data again. The dialog was simply a warning of which DLLs had no debug information. The purpose of this was to alert the user as to why a given DLL had no code coverage (debug information is required to provide code coverage).

The new version of the debug information dialog is available from the dashboard. The new dialog displays all DLLs and their status. Status information indicates whether debug information was found, whether Coverage Validator is interested in that DLL, and if not interested, why not. This allows you to easily determine if a DLL filter is causing the DLL to be ignored for code coverage.

Coverage Debug Information

When the dialog is displayed a Learn More… link is available. This presents a simple dialog providing some information about debug information for debug and release builds. We’ve used a modified static control on these dialogs to provide useful bold text (something that you can’t do with plain MFC applications). It’s a small thing but it improves the structure of the dialog. This text was displayed as part of the previous debug information dialog; moving it to a separate dialog chunks the information, making it more accessible.

Coverage Debug Information Learn More

There is more to be done with this part of the software but this is an improvement compared to previous versions.

Tips Dialog

Coverage Validator has always had a “Tip of the day” dialog. This is something of a holdover from earlier forms of application development. We’d never really paid much attention to it, to how it functioned, how it behaved and what it communicated.

Tip of the day dialog

We’re planning to completely overhaul this dialog but that is a longer term activity. As such in this revision we’ve just made some smaller scale changes that still have quite an impact.

Tips dialog

The first change is that while the previous “Tip of the day” dialog was displayed at application startup, the new “Tips” dialog is not. Instead, the tips dialog is displayed when you launch an application and are waiting for instrumentation to complete. This means you are shown tips during “dead time” that you can’t really use effectively – you’re waiting for the tool. The tips dialog is still available from the Help menu, as was the previous Tip of the day dialog.

The second change is that the new Tips dialog is modeless. The previous Tip of the day dialog was modal. This means you can leave the dialog displayed and move it out of the way. You don’t have to dismiss it.

We've done away with the icon and replaced it with a tip number so you know which tip you are viewing. Tips are no longer viewed sequentially (Next Tip) but in a random order. At first this seems like a crazy thing to do, but when you try it, it actually increases your engagement: you wonder how many tips there are and which one you'll get next. Hipmunk was an inspiration for this – they do something similar when calculating your plane flights (I hadn't seen this when I used Hipmunk, but Roger from ThinkUI had).

There is more to be done with this part of the software but this is a useful improvement until our completely reworked Tips dialog is ready for release.

Conclusion

All of the changes have been made to improve and simplify the way information is communicated to the user of Coverage Validator. Improved graphics displays, interactive dashboards, better data dialogs, hyperlinks and occasional use of bold text all improve the user experience.

We’re not finished improving Coverage Validator. These are just our initial round of improvements.


Command line support for .Net services and ASP.Net

By , September 29, 2011 3:59 pm

Today we have released updated versions of our software tools for .Net, .Net Coverage Validator, .Net Memory Validator and .Net Performance Validator.

The updates to each tool add support for monitoring .Net services and ASP.Net processes when working with the tool from the command line. This allows you to, for example, control the tool from batch files and easily create large suites of tests that you can run using batch files or other scripting technologies. This command line support builds on the existing command line support in each tool for working with .Net desktop applications. For information on the existing command line options for each tool, please see the help file that ships with each tool.

I’m going to outline the new command line options for each tool and provide some basic examples of how you might use these options with each tool. Each tool has been given the same basic options, with each tool getting some additional options specific to that tool.

.Net Coverage Validator

  • -serviceName fileName

    The -serviceName option is used to specify which service .Net Coverage Validator should monitor. The filename argument should be quoted if the filename contains spaces.

    -serviceName c:\path\myservice.exe
    -serviceName "c:\path with spaces\myservice.exe"
    
  • -urlToVisit url

    The -urlToVisit option specifies the web page that should be opened by the web browser when working with ASP.Net web servers.

    -urlToVisit http://localhost/myTestPage.aspx
    -urlToVisit "http://localhost/myTestPage.aspx"
    
  • -aspNetName filename

    The -aspNetName option is used to specify the ASP.Net process that is used by IIS. The filename argument should be quoted if the filename contains spaces.

    -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe
    -aspNetName "c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe"
    

    The value specified should be the value that you would specify if you used the .Net Coverage Validator interface to work with ASP.Net applications.

  • -webRoot directoryname

    The -webRoot option is used to specify the web root for this ASP.Net process. The directoryname argument should be quoted if the directory name contains spaces.

    -webRoot c:\inetpub\wwwroot
    -webRoot "c:\inetpub\wwwroot"
    
  • -webBrowser filename

    The -webBrowser option is used to specify which web browser to use to open the web page if the user has chosen to specify a particular web browser. This option is used when the -aspNetWeb option specifies to use a specific web browser. The filename argument should be quoted if the filename contains spaces.

    -webBrowser c:\mozilla\firefox.exe
    -webBrowser "c:\program files\internet explorer\iexplore.exe"
    
  • -coverageDirectory directoryname

    The -coverageDirectory option specifies the directory .Net Coverage Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage. The directoryname argument should be quoted if the directory name contains spaces.

    -coverageDirectory c:\temp
    
  • -aspNetWeb default|user|specific

    The -aspNetWeb option is used to specify which web browser will be used to open the web page you have specified.

    The options are:

    default – Use the default web browser.
    user – The user will open a web browser themselves.
    specific – Use a web browser identified by a filepath. Use in conjunction with the -webBrowser option.

    -aspNetWeb default
    -aspNetWeb user
    -aspNetWeb specific
    
  • -aspNetDelay integer

    The -aspNetDelay option is used to specify how long .Net Coverage Validator will wait for IIS to reset and restart itself. The delay is specified in milliseconds.

    -aspNetDelay 5000
    

Working with .Net services

This example shows how to use .Net Coverage Validator with a .Net service.

dnCoverageValidator.exe -serviceName E:\WindowsService\bin\Debug\WindowsService.exe 
-coverageDirectory c:\test\coverage -saveSession "c:\test results\testbed.dncvm" 
-hideUI
  • -serviceName E:\WindowsService\bin\Debug\WindowsService.exe

    This specifies the service to monitor. The service must be started after .Net Coverage Validator has been instructed to monitor the service.

  • -coverageDirectory c:\test\coverage

    This specifies the directory .Net Coverage Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage.

  • -saveSession “c:\test results\testbed.dncvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dncvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Coverage Validator will close.

Working with ASP.Net

This example shows how to use .Net Coverage Validator with ASP.Net.

dnCoverageValidator.exe -urlToVisit http://localhost/testWebApp.aspx 
-aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe 
-aspNetWeb default -aspNetDelay 5000 -webRoot c:\inetpub\wwwroot 
-coverageDirectory c:\test\coverage -saveSession "c:\test results\testbed.dncvm" 
-hideUI
  • -urlToVisit http://localhost/testWebApp.aspx

    This specifies the web page that will be opened when working with ASP.Net.

  • -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe

    This specifies the ASP.Net worker process that IIS will start.

  • -aspNetWeb default

    This specifies that the system defined web browser will be used to open the web page.

  • -aspNetDelay 5000

    This specifies a delay of 5 seconds to allow the IIS webserver to restart.

  • -webRoot c:\inetpub\wwwroot

    This specifies the web root of the IIS web server.

  • -coverageDirectory c:\test\coverage

    This specifies the directory .Net Coverage Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage.

  • -saveSession “c:\test results\testbed.dncvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dncvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Coverage Validator will close.

.Net Memory Validator

  • -collectData

    The -collectData option causes .Net Memory Validator to collect memory allocation events until the user chooses to disable data collection from the user interface.

    -collectData
    
  • -doNotCollectData

    The -doNotCollectData option causes .Net Memory Validator to ignore memory allocation events until the user chooses to enable data collection from the user interface.

    -doNotCollectData
    
  • -serviceName fileName

    The -serviceName option is used to specify which service .Net Memory Validator should monitor. The filename argument should be quoted if the filename contains spaces.

    -serviceName c:\path\myservice.exe
    -serviceName "c:\path with spaces\myservice.exe"
    
  • -urlToVisit url

    The -urlToVisit option specifies the web page that should be opened by the web browser when working with ASP.Net web servers.

    -urlToVisit http://localhost/myTestPage.aspx
    -urlToVisit "http://localhost/myTestPage.aspx"
    
  • -aspNetName filename

    The -aspNetName option is used to specify the ASP.Net process that is used by IIS. The filename argument should be quoted if the filename contains spaces.

    -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe
    -aspNetName "c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe"
    

    The value specified should be the value that you would specify if you used the .Net Memory Validator interface to work with ASP.Net applications.

  • -webRoot directoryname

    The -webRoot option is used to specify the web root for this ASP.Net process. The directoryname argument should be quoted if the directory name contains spaces.

    -webRoot c:\inetpub\wwwroot
    -webRoot "c:\inetpub\wwwroot"
    
  • -webBrowser filename

    The -webBrowser option is used to specify which web browser to use to open the web page if the user has chosen to specify a particular web browser. This option is used when the -aspNetWeb option specifies to use a specific web browser. The filename argument should be quoted if the filename contains spaces.

    -webBrowser c:\mozilla\firefox.exe
    -webBrowser "c:\program files\internet explorer\iexplore.exe"
    
  • -aspNetWeb default|user|specific

    The -aspNetWeb option is used to specify which web browser will be used to open the web page you have specified.

    The options are:

    default – Use the default web browser.
    user – The user will open a web browser themselves.
    specific – Use a web browser identified by a filepath. Use in conjunction with the -webBrowser option.

    -aspNetWeb default
    -aspNetWeb user
    -aspNetWeb specific
    
  • -aspNetDelay integer

    The -aspNetDelay option is used to specify how long .Net Memory Validator will wait for IIS to reset and restart itself. The delay is specified in milliseconds.

    -aspNetDelay 5000
    

Working with .Net services

This example shows how to use .Net Memory Validator with a .Net service.

dnMemoryValidator.exe -serviceName E:\WindowsService\bin\Debug\WindowsService.exe 
-saveSession "c:\test results\testbed.dnmvm" -hideUI
  • -serviceName E:\WindowsService\bin\Debug\WindowsService.exe

    This specifies the service to monitor. The service must be started after .Net Memory Validator has been instructed to monitor the service.

  • -saveSession “c:\test results\testbed.dnmvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dnmvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Memory Validator will close.

Working with ASP.Net

This example shows how to use .Net Memory Validator with ASP.Net.

dnMemoryValidator.exe -urlToVisit http://localhost/testWebApp.aspx 
-aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe 
-aspNetWeb default -aspNetDelay 5000 -webRoot c:\inetpub\wwwroot 
-saveSession "c:\test results\testbed.dnmvm" -hideUI
  • -urlToVisit http://localhost/testWebApp.aspx

    This specifies the web page that will be opened when working with ASP.Net.

  • -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe

    This specifies the ASP.Net worker process that IIS will start.

  • -aspNetWeb default

    This specifies that the system defined web browser will be used to open the web page.

  • -aspNetDelay 5000

    This specifies a delay of 5 seconds to allow the IIS webserver to restart.

  • -webRoot c:\inetpub\wwwroot

    This specifies the web root of the IIS web server.

  • -saveSession “c:\test results\testbed.dnmvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dnmvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Memory Validator will close.

.Net Performance Validator

  • -collectData

    The -collectData option causes .Net Performance Validator to collect performance data until the user chooses to disable data collection from the user interface.

    -collectData
    
  • -doNotCollectData

    The -doNotCollectData option causes .Net Performance Validator to ignore performance data until the user chooses to enable data collection from the user interface.

    -doNotCollectData
    
  • -collectFunctionTimes

    The -collectFunctionTimes option causes .Net Performance Validator to collect timing information for functions in the application/service/ASP.Net webserver.

    -collectFunctionTimes
    
  • -collectLineTimes

    The -collectLineTimes option causes .Net Performance Validator to collect timing information for lines in the application/service/ASP.Net webserver.

    -collectLineTimes
    
  • -doNotCollectFunctionTimes

    The -doNotCollectFunctionTimes option causes .Net Performance Validator not to collect timing information for functions in the application/service/ASP.Net webserver.

    -doNotCollectFunctionTimes
    
  • -doNotCollectLineTimes

    The -doNotCollectLineTimes option causes .Net Performance Validator not to collect timing information for lines in the application/service/ASP.Net webserver.

    -doNotCollectLineTimes
    
  • -serviceName fileName

    The -serviceName option is used to specify which service .Net Performance Validator should monitor. The filename argument should be quoted if the filename contains spaces.

    -serviceName c:\path\myservice.exe
    -serviceName "c:\path with spaces\myservice.exe"
    
  • -urlToVisit url

    The -urlToVisit option specifies the web page that should be opened by the web browser when working with ASP.Net web servers.

    -urlToVisit http://localhost/myTestPage.aspx
    -urlToVisit "http://localhost/myTestPage.aspx"
    
  • -aspNetName filename

    The -aspNetName option is used to specify the ASP.Net process that is used by IIS. The filename argument should be quoted if the filename contains spaces.

    -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe
    -aspNetName "c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe"
    

    The value specified should be the value that you would specify if you used the .Net Performance Validator interface to work with ASP.Net applications.

  • -webRoot directoryname

    The -webRoot option is used to specify the web root for this ASP.Net process. The directoryname argument should be quoted if the directory name contains spaces.

    -webRoot c:\inetpub\wwwroot
    -webRoot "c:\inetpub\wwwroot"
    
  • -webBrowser filename

    The -webBrowser option is used to specify which web browser to use to open the web page if the user has chosen to specify a particular web browser. This option is used when the -aspNetWeb option specifies to use a specific web browser. The filename argument should be quoted if the filename contains spaces.

    -webBrowser c:\mozilla\firefox.exe
    -webBrowser "c:\program files\internet explorer\iexplore.exe"
    
  • -profilerDirectory directoryname

    The -profilerDirectory option specifies the directory .Net Performance Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage. The directoryname argument should be quoted if the directory name contains spaces.

    -profilerDirectory c:\temp
    
  • -aspNetWeb default|user|specific

    The -aspNetWeb option is used to specify which web browser will be used to open the web page you have specified.

    The options are:

    default – Use the default web browser.
    user – The user will open a web browser themselves.
    specific – Use a web browser identified by a filepath. Use in conjunction with the -webBrowser option.

    -aspNetWeb default
    -aspNetWeb user
    -aspNetWeb specific
    
  • -aspNetDelay integer

    The -aspNetDelay option is used to specify how long .Net Performance Validator will wait for IIS to reset and restart itself. The delay is specified in milliseconds.

    -aspNetDelay 5000
    

Working with .Net services

This example shows how to use .Net Performance Validator with a .Net service.

dnPerformanceValidator.exe -serviceName E:\WindowsService\bin\Debug\WindowsService.exe 
-profilerDirectory c:\test\profiler -saveSession "c:\test results\testbed.dnpvm" 
-hideUI
  • -serviceName E:\WindowsService\bin\Debug\WindowsService.exe

    This specifies the service to monitor. The service must be started after .Net Performance Validator has been instructed to monitor the service.

  • -profilerDirectory c:\test\profiler

    This specifies the directory .Net Performance Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage.

  • -saveSession “c:\test results\testbed.dnpvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dnpvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Performance Validator will close.

Working with ASP.Net

This example shows how to use .Net Performance Validator with ASP.Net.

dnPerformanceValidator.exe -urlToVisit http://localhost/testWebApp.aspx 
-aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe 
-aspNetWeb default -aspNetDelay 5000 -webRoot c:\inetpub\wwwroot 
-profilerDirectory c:\test\profiler -saveSession "c:\test results\testbed.dnpvm" 
-hideUI
  • -urlToVisit http://localhost/testWebApp.aspx

    This specifies the web page that will be opened when working with ASP.Net.

  • -aspNetName c:\windows\Microsoft.Net\Framework\v4.0.30319\aspnet_wp.exe

    This specifies the ASP.Net worker process that IIS will start.

  • -aspNetWeb default

    This specifies that the system defined web browser will be used to open the web page.

  • -aspNetDelay 5000

    This specifies a delay of 5 seconds to allow the IIS webserver to restart.

  • -webRoot c:\inetpub\wwwroot

    This specifies the web root of the IIS web server.

  • -profilerDirectory c:\test\profiler

    This specifies the directory .Net Performance Validator will use to communicate with the GUI if security privileges do not allow named pipes and shared memory usage.

  • -saveSession “c:\test results\testbed.dnpvm”

    This specifies that after the application finishes the session should be saved in the file c:\test results\testbed.dnpvm.

  • -hideUI

    This specifies that the user interface should not be displayed during the test. When the target service closes .Net Performance Validator will close.


Doing good work can make you feel a bit stupid

By , July 19, 2010 6:08 pm

Doing good work can make you feel a bit stupid. Well, that's my mixed bag of feelings from this weekend. Here's why…

Last week was a rollercoaster of a week for software development at Software Verification.

Off by one, again?

First off, we found a nasty off-by-one bug in our nifty memory mapped performance tools, specifically Performance Validator. The off-by-one didn't cause any crashes, errors or bad data. But it did cause us to eat memory like nobody's business. For various reasons it hadn't been found earlier, as it didn't trigger any of our tests.

Then along comes a customer with his huge monolithic executable which won't profile properly. He had already thrown us a curve ball by supplying it as a mixed mode app – half native C++, half C#. That in itself causes problems with profiling – the native profiler has to identify and ignore any functions that are managed (.Net). He was pleased with that turnaround, but then surprised that we couldn't handle his app, as we had handled previous (smaller) versions of it. The main reason he was using our profiler is that he had tried others and they couldn't handle his app – and now neither could we! Unacceptable – well, that was my first thought. I was half resigned to the fact that maybe there wasn't a bug and this was just a goliath of an app that couldn't be profiled.

I spent a day adding logging to every place, no matter how insignificant, in our function tree mapping code. This code uses shared memory mapped space exclusively, so you can't refer to other nodes by address, as an address in one process won't be valid in the other processes reading the data. We had previously reorganised this code to give us a significant improvement in handling large data volumes and thus were surprised at the failure presented to us. Then came a long series of tests, each of which was very slow (the logging writes to files and it's a large executable to process). The logging data was huge. Some of the log files were GBs in size. It's amazing what Notepad can open if you give it a chance!

Finally, about 10 hours in, I found the first failure. Shortly after that I found the root cause. We were using one of our memory mapped APIs for double duty, and the second use was incorrect – it was multiplying our correctly specified size by a prefixed size, offset by one. This behaviour is correct for a different usage. The main cause of the problem was, in my opinion, incorrectly named methods. A quick edit later and we had two more sensibly named methods and much improved memory performance. A few tests later, with a lot of the logging disabled, we were back to sensible performance with this huge customer application (and a happy customer).

So chalk up one “how the hell did that happen?” followed by feelings of elation and pleasure as we fixed it so quickly.

I'm always amazed by off-by-one bugs. It doesn't seem to matter how experienced you are – they do reappear from time to time. Maybe that is one of the perils of logic for you, or tiredness.

I guess there is a Ph.D. for someone in studying CVS commits, file modification timestamps and off-by-one bugs and trying to map them to time-of-day/tiredness attributes.

That did eat my Wednesday and Thursday evenings, but it was worth it.

Not to be outdone…

I had always thought .Net Coverage Validator was a bit slow. It was good in GUI interaction tests (which is part of what .Net Coverage Validator is about – realtime code coverage feedback to aid testing) but not good on long running loops (a qsort() for example). I wanted to fix that. So following on from the success with the C++ profiling I went exploring an idea that had been rattling around in my head for some time. The Expert .Net 2.0 IL Assembler book (Serge Lidin, Microsoft Press) was an invaluable aid in this.

What were we doing that was so slow?

The previous (pre V3.00) .Net Coverage Validator implementation calls a method for each line that is visited in a .Net assembly. That method is in a unique DLL and has a unique ID. We were tracing application execution and when we found our specific method we’d walk up the callstack one item and that would be the location of a coverage line visit. This technique works, but it has a high overhead:

  1. ICorProfiler / ICorProfiler2 callback overhead.
  2. Callstack walking overhead.

The result is that for GUI operations, code coverage is fast enough that you don't notice any problems. But for long running functions or loops, code coverage is very slow.

This needed replacing.

What are we doing now that is so fast?

The new implementation doesn't trace methods or call a method of our choosing. For each line we modify a counter. The location of the counter and the modification of it are placed directly into the ilAsm code for each C#/VB.Net method. Our first implementation of .Net Coverage Validator could not do this because our shared memory mapped coverage data architecture did not allow it – the shared memory may have moved during the execution run and thus the embedded counter location would be invalidated. The new architecture allows the pointer to the counter to be fixed.

The implementation and testing for this only took a few hours. Amazing. I thought it was going to be fraught with trouble, not having done much serious ilAsm for a year or so.

Result?

The new architecture is so lightweight that you barely notice the performance overhead. Less than 1%. Your code runs just about at full speed even with code coverage in place.

As you can imagine, getting that implemented, working and tested in less than a day is an incredible feeling. Especially compared to the previous performance level we had.

So why feel stupid?

Having achieved such good performance (and naturally feeling quite good about yourself for a while afterwards), it's hard not to look back on the previous implementation and think “Why did we accept that? We could have done so much better”. And that is where feeling stupid comes in. You've got to be self critical to improve. Pat yourself on the back for the good times and reflect on the past to try to recognise where you could have done better, so that you don't make the same mistake in the future.

And now for our next trick…

The inspiration for our first .Net Coverage Validator implementation came from our Java Coverage Validator tool. Java opcodes don’t allow you to modify memory directly like .Net ilAsm does, so we had to use the method calling technique for Java. However given our success with .Net we’ve gone back to the JVMTI header files (which didn’t exist when we first wrote the Java tools) and have found there may be a way to improve things. We’ll be looking at that soon.


Support for MinGW and QtCreator

By , December 4, 2009 5:01 pm

Everyone uses Visual Studio to write C and C++ software, don't they? “Yes!” you all chorus. Apart from some guys at the back who like to use gcc and g++. They use MinGW when working on Windows. And they may even use Emacs, or perish the thought, vi!

Up until now we haven’t been able to cater to the needs of gcc and g++ users. We’d get email every month asking when we were going to support MinGW or if we supported QtCreator. It was frustrating admitting we couldn’t support that environment. Even more so as the founders of Software Verification wrote large GIS applications using gcc and g++ back in the early 1990s.

During October we integrated support for MinGW and QtCreator into Coverage Validator, Memory Validator, Performance Validator and Thread Validator. Both COFF and STABS debug formats are supported, which provides some flexibility in how you choose to handle your symbols.

We’ll continue to add support for additional compilers to our tools as long as there is interest from you, the kind people that use our software tools.


New .Net software tools

By , January 22, 2007 5:14 pm

We’ve spent the last few years creating our software tools for C++, Java and all the funky scripting languages that are now getting the recognition they deserve (Python, Ruby, JavaScript, etc).

During all this time we've been asked if we have .Net versions of our tools, nearly always a question related to C#. We had to answer “No, but we will have .Net versions at some time in the future.” That time has come. We now have .Net versions of Memory Validator and Performance Validator available as beta products.

Users of our popular Memory Validator software tool (for C++, Delphi…) will notice that the UI for .Net is quite different. This is because detecting memory leaks in garbage collected environments requires different approaches to collecting and analysing data. We have some innovative ideas in .Net Memory Validator, including the Allocations view, which provides a breakdown of objects allocated per function name; the Objects view, which provides a breakdown of objects allocated per object type; and the Generations view, which provides an easy to read display of how many objects are allocated per generation, per object type. You can easily spot the trend graph of object usage and determine which objects are climbing or falling. A reference view allows you to view the object heap as a graph. The hotspots, memory, analysis, virtual and diagnostic tabs will be familiar to users of the original Memory Validator for C++.

.Net Memory Validator has the same user interface features you will find in our other Memory Validator products for garbage collected languages (Java, JavaScript, Python, Ruby, etc), making switching languages a doddle. As with all our products, the .Net version, although different under the hood, has the same team behind it. You are probably familiar with the case of a company creating a software tool for language X and then porting it to support language Y, where the language Y version is lacking because it was a ground up rewrite, often by people unfamiliar with the language X version. This leads to incompatibilities, different UI behaviour and new bugs. That isn't how we create our new language versions. Each version has its own code base, which allows for the radical under the hood changes needed to accommodate each language. But it has the same team and keeps the same user interface elements where applicable. So even if the code under the hood changes, you get the same experience regardless of language.

This also allows our bug fixing to be improved. Many bugs from one version of a product apply to our other language versions. Thus if we find and fix a bug in, say, Java Memory Validator, that bug fix can often be applied to JavaScript Memory Validator, Python Memory Validator, Ruby Memory Validator, .Net Memory Validator, etc. And our customers usually get the bug fix the very next day. This development method has been carried into the .Net line of software tools.

.Net Performance Validator continues this trend of the same user interface. If you know how to use any of our Performance Validator products (including the C++ version), you will know how to use .Net Performance Validator. It's that easy.

The callstack view provides a real time insight into where a particular thread is running. Raw Statistics lets you inspect the raw data collected about performance, and Statistics lets you inspect this data in a more orderly fashion. Relations provides the same information but allows you to view which function was called from which function. Call Tree provides a call tree which you can expand and contract to view the performance data. Call Graph provides this information as a graph with each function listed as infrequently as possible. Call Graph is a very useful way to find an expensive function: right click, choose goto Call Tree Node, and the first node in the call tree that relates to the same node in the Call Graph expands, with the node highlighted and source code displayed if available. Analysis allows complex queries onto the data, and the diagnostic tab provides information about the instrumentation process.

Cheers for now, we have more .Net tools to work on.

Linux? MacOS X? Not right now, but some time in the future. Where have you read that before? 🙂


Firefox 2.0 support

By , November 8, 2006 5:29 pm

We’ve just released the latest versions of all our JavaScript tools for flow tracing, code coverage, performance profiling and memory profiling. The latest versions support Firefox 2.0 as well as Firefox 1.5 and 1.0 and Flock 0.7.

Another improvement is that the JavaScript tools automatically prevent any installed debuggers from overriding the hooks required to make the JavaScript tool work. This should remove a regular source of confusion for those trying to use our tools when they have a JavaScript debugger installed.

Finally we’ve improved the JavaScript parsing and also the source code colouring.

