Author: Frank Warmerdam
Contact: warmerdam at pobox.com
Last Updated: 2011/3/20
The msautotest is a suite of test maps, data files, expected result images, and test scripts intended to make it easy to run a set of automated regression tests on MapServer.
The autotest is available from SVN. On Unix it can be fetched with something like:
% svn checkout http://svn.osgeo.org/mapserver/trunk/msautotest
This would create an msautotest subdirectory wherever you are. I normally put the autotest within my MapServer directory.
The autotest requires python (but not python MapScript), so if you don’t have python on your system, get and install it. More information on python is available at http://www.python.org. Most Linux systems have some version already installed.
The autotest also requires that the executables built with MapServer, notably shp2img, legend, mapserv and scalebar, are available in the path. I generally accomplish this by adding the MapServer build directory to my path.
csh:
% setenv PATH $HOME/mapserver:$PATH
bash/sh:
% PATH=$HOME/mapserver:$PATH
Verify that the executables are accessible by typing ‘shp2img -v’ in the autotest directory:
warmerda@gdal2200[152]% shp2img -v
MapServer version 3.7 (development) OUTPUT=PNG OUTPUT=JPEG OUTPUT=WBMP
SUPPORTS=PROJ SUPPORTS=TTF SUPPORTS=WMS_SERVER SUPPORTS=GD2_RGB
INPUT=TIFF INPUT=EPPL7 INPUT=JPEG INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
Now you are ready to run the tests. The tests are subdivided into categories, currently “gdal”, “misc”, and “wxs”, each in its own subdirectory. To run the “gdal” tests, cd into the gdal directory and run the run_test.py script.
Unix:
./run_test.py
Windows:
python.exe run_test.py
The results in the gdal directory might look something like this:
warmerda@gdal2200[164]% run_test.py
version = MapServer version 6.0.0-beta2 OUTPUT=GIF OUTPUT=PNG OUTPUT=JPEG SUPPORTS=PROJ SUPPORTS=AGG SUPPORTS=FREETYPE SUPPORTS=ICONV SUPPORTS=WMS_SERVER SUPPORTS=WMS_CLIENT SUPPORTS=WFS_SERVER SUPPORTS=WCS_SERVER SUPPORTS=SOS_SERVER SUPPORTS=THREADS SUPPORTS=GEOS INPUT=POSTGIS INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
Processing: grayalpha.map
results match.
Processing: class16.map
result images match, though files differ.
Processing: nonsquare_multiraw.map
results match.
Processing: bilinear_float.map
results match.
Processing: processing_scale_auto.map
results match.
Processing: grayalpha_plug.map
result images match, though files differ.
Processing: processing_bands.map
results match.
...
Processing: 256color_overdose_cmt.map
results match.
Test done (100.00% success):
0 tests skipped
69 tests succeeded
0 tests failed
0 test results initialized
In general you are hoping to see that no tests failed.
Because most msautotest tests compare generated images to expected images, the tests are very sensitive to subtle rounding differences on different systems and to subtle rendering changes in libraries like freetype, gd, and agg. So it is quite common to see some failures.
These failures then need to be reviewed manually to see whether the differences are acceptable, merely reflecting differences in rounding/rendering, or whether they are “real” bugs. This is normally accomplished by visually comparing files in the “result” directory with the corresponding files in the “expected” directory. It is best if this can be done in an application that allows images to be layered and toggled on and off to visually highlight what is changing. OpenEV can be used for this.
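If a layered viewer is not handy, a difference image can also make the changes stand out. The following is a minimal sketch (not part of msautotest) that writes a per-pixel absolute-difference image using the GDAL Python bindings and numpy; the filenames are hypothetical examples:

# diff_image.py - write the absolute difference of two images so that
# changed pixels stand out.  Assumes both images have the same size and
# band count.
import sys
import numpy as np
from osgeo import gdal

result = gdal.Open(sys.argv[1])    # e.g. result/class16.png
expected = gdal.Open(sys.argv[2])  # e.g. expected/class16.png

a = result.ReadAsArray().astype(np.int16)
b = expected.ReadAsArray().astype(np.int16)
diff = np.abs(a - b).astype(np.uint8)

bands = 1 if diff.ndim == 2 else diff.shape[0]
out = gdal.GetDriverByName('GTiff').Create(
    'diff.tif', result.RasterXSize, result.RasterYSize, bands, gdal.GDT_Byte)
if bands == 1:
    out.GetRasterBand(1).WriteArray(diff)
else:
    for i in range(bands):
        out.GetRasterBand(i + 1).WriteArray(diff[i])
out.FlushCache()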
The msautotest suite was initially developed by Frank Warmerdam (warmerdam at pobox.com), who can be contacted with questions about it.
The msautotest suite is organized as a series of .map files. The python scripts basically scan the directory in which they are run for files ending in .map. These are then “run”, with the result dumped into a file in the result directory. A binary comparison is then done against the corresponding file in the expected directory and differences are reported. The general principles for the test suite are as follows:
For shp2img tests, the output file generated by processing a map is put in result/<mapfilebasename>.png (regardless of whether it is actually PNG). So when gdal/256_overlay_res.map is processed, the output file is written to gdal/result/256_overlay_res.png. This is compared to gdal/expected/256_overlay_res.png. If they differ, the test fails and the “wrong” result file is left behind for investigation. If they match, the result file is deleted. If there is no corresponding expected file, the result file is moved to the expected directory (and reported as an “initialized” test), ready to be committed to SVN.
For tests using RUN_PARMS, the output filename is specified in the RUN_PARMS statement, but otherwise the comparisons are done similarly.
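The overall flow for the default shp2img case can be sketched as follows; this is an illustrative outline, not the actual run_test.py code:

# Sketch of the harness flow: scan for .map files, run each one, then
# compare to the expected image (or initialize it if missing).
import filecmp
import glob
import os
import subprocess

os.makedirs('result', exist_ok=True)
for mapfile in glob.glob('*.map'):
    base = os.path.splitext(mapfile)[0]
    result = os.path.join('result', base + '.png')
    expected = os.path.join('expected', base + '.png')
    subprocess.call(['shp2img', '-m', mapfile, '-o', result])
    if not os.path.exists(expected):
        os.rename(result, expected)   # reported as an "initialized" test
    elif filecmp.cmp(result, expected, shallow=False):
        os.remove(result)             # results match: clean up
    else:
        # Left behind for review; the fuller comparison cascade is
        # described below.
        print('%s: results differ' % mapfile)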
The initial comparison of files is done as a binary file comparison. If that fails, for image files, there is an attempt to compare the image checksums using the GDAL Python bindings. If the GDAL Python bindings are not available this step is quietly skipped.
If you install the PerceptualDiff program (http://pdiff.sourceforge.net/) and it is in the path, then the autotest will attempt to use it as a last fallback when comparing images. If images are found to be “perceptually” the same, the test will pass with the message “result images perceptually match, though files differ.” This can dramatically cut down the number of apparent failures that, on close inspection, are for all intents and purposes identical. Building PerceptualDiff is a bit of a hassle, and it will miss some significant differences, so its use is of mixed value.
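Put together, the comparison cascade looks roughly like the sketch below. This is not the actual run_test.py code, and it assumes the PerceptualDiff executable is named perceptualdiff:

# Sketch of the comparison cascade: byte comparison first, then GDAL
# band checksums, then PerceptualDiff as a last fallback.
import filecmp
import subprocess

def compare(result, expected):
    if filecmp.cmp(result, expected, shallow=False):
        return 'results match'
    try:
        from osgeo import gdal
        r, e = gdal.Open(result), gdal.Open(expected)
        if r is not None and e is not None:
            r_sums = [r.GetRasterBand(i + 1).Checksum()
                      for i in range(r.RasterCount)]
            e_sums = [e.GetRasterBand(i + 1).Checksum()
                      for i in range(e.RasterCount)]
            if r_sums == e_sums:
                return 'result images match, though files differ'
    except ImportError:
        pass  # GDAL Python bindings unavailable: quietly skip this step
    try:
        if subprocess.call(['perceptualdiff', result, expected]) == 0:
            return 'result images perceptually match, though files differ'
    except OSError:
        pass  # PerceptualDiff not installed or not in the path
    return 'files differ'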
For non-image results, such as xml and html output, the special image comparisons are skipped.
Because MapServer can be built with many possible extensions, such as support for OGR, GDAL, and PROJ.4, it is desirable to have the test suite automatically detect which tests should be run based on the configuration of MapServer. This is accomplished by capturing the version output of “shp2img -v” and using the various keys in it to decide which tests can be run. A directory can have a file called “all_require.txt” with a “REQUIRES:” line indicating components required for all tests in the directory. If any of these requirements are not met, no tests at all will be run in this directory. For instance, gdal/all_require.txt lists:
REQUIRES: INPUT=GDAL OUTPUT=PNG
In addition, individual .map files can have additional requirements expressed as a REQUIRES: comment in the mapfile. If the requirements are not met the map will be skipped (and listed in the summary as a skipped test). For example gdal/256_overlay_res.map has the following line to indicate it requires projection support (in addition to the INPUT=GDAL and OUTPUT=PNG required by all files in the directory):
# REQUIRES: SUPPORTS=PROJ
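The check itself is straightforward; a sketch (with hypothetical helper names, not the harness's actual code) might look like:

import subprocess

def get_version_keys():
    # Capture the version line, e.g. "MapServer version 6.0.0-beta2
    # OUTPUT=PNG SUPPORTS=PROJ INPUT=GDAL ...", and split it into tokens.
    out = subprocess.check_output(['shp2img', '-v']).decode()
    return set(out.split())

def requirements_met(path, keys):
    # Works for all_require.txt and for "# REQUIRES:" mapfile comments.
    for line in open(path):
        if 'REQUIRES:' in line:
            wanted = line.split('REQUIRES:')[1].split()
            return all(k in keys for k in wanted)
    return True  # no requirements stated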
There is also a RUN_PARMS keyword that may be placed in map files to override much of the default behaviour. The default is to process map files with shp2img, but other programs such as mapserv or scalebar can be requested, and various commandline arguments can be altered, as well as the name of the output file. For instance, the following line in misc/tr_scalebar.map indicates that the output file should be called tr_scalebar.png and that the commandline should look like “[SCALEBAR] [MAPFILE] [RESULT]” instead of the default “[SHP2IMG] -m [MAPFILE] -o [RESULT]”.
RUN_PARMS: tr_scalebar.png [SCALEBAR] [MAPFILE] [RESULT]
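The bracketed tokens are substituted before the command is run; conceptually, something like the following (the paths are illustrative):

template = '[SCALEBAR] [MAPFILE] [RESULT]'
command = (template
           .replace('[SCALEBAR]', 'scalebar')
           .replace('[MAPFILE]', 'tr_scalebar.map')
           .replace('[RESULT]', 'result/tr_scalebar.png'))
print(command)  # scalebar tr_scalebar.map result/tr_scalebar.png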
For testing things as they would work from an HTTP request, use RUN_PARMS with the [MAPSERV] program and a QUERY_STRING argument, with results redirected to a file.
# RUN_PARMS: wcs_cap.xml [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCapabilities' > [RESULT]
For web services that generate images that would normally be prefixed with the Content-type header, use [RESULT_DEMIME] to instruct the test harness to strip off any http headers before doing the comparison. This is particularly valuable for image results, so the files can be compared using the special image comparisons.
# Generate simple PNG.
# RUN_PARMS: wcs_simple.png [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&WIDTH=120&HEIGHT=90&FORMAT=GDPNG&BBOX=0,0,400,300&COVERAGE=grey&CRS=EPSG:32611' > [RESULT_DEMIME]
As mentioned above, the [RESULT_DEMIME] directive can be used for image file output from web services (e.g. WMS GetMap requests).
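Conceptually, de-miming just drops everything up to and including the blank line that terminates the HTTP headers; a minimal sketch (not the harness's actual code):

def demime(path):
    # Strip HTTP headers in place, leaving only the body for comparison.
    data = open(path, 'rb').read()
    for sep in (b'\r\n\r\n', b'\n\n'):
        pos = data.find(sep)
        if pos != -1:
            open(path, 'wb').write(data[pos + len(sep):])
            return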
For text, XML and HTML output it can also be helpful to apply other pre-processing to the output file to make comparisons easier. The [RESULT_DEVERSION] directive in the RUN_PARMS applies several translations to the output file; as the name suggests, these normalize version-specific strings so that expected results do not change with every MapServer release.
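As a sketch of the kind of translation involved (the regex below is an assumption for illustration, not the harness's actual rule):

import re

def deversion(text):
    # e.g. "MapServer version 6.0.0-beta2 ..." -> "MapServer version ..."
    return re.sub(r'MapServer version \S+', 'MapServer version ...', text)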
In some cases it is also helpful to strip out lines matching a particular pattern. The [STRIP:xxx] directive drops all lines containing the indicated substring. Multiple [STRIP:xxx] directives may be included in the command string if desired. For instance, the runtime substitution validation test (misc/runtime_sub.map) produces error messages containing an absolute path, which changes from system to system. The following directive will drop any lines in the result that contain the string ShapefileOpen, which in this case will be the error messages:
# RUN_PARMS: runtime_sub_test001.txt [MAPSERV] QUERY_STRING='map=[MAPFILE]&mode=map&layer=layer1&name1=bdry_counpy2' > [RESULT_DEVERSION] [STRIP:ShapefileOpen]
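The stripping behaviour itself amounts to a simple line filter; a minimal sketch:

def strip_lines(path, substring):
    # Drop every line containing the substring before comparison.
    lines = open(path).readlines()
    open(path, 'w').writelines(l for l in lines if substring not in l)

# e.g. strip_lines('result/runtime_sub_test001.txt', 'ShapefileOpen')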
When running the test suite, it is common for some tests to fail depending on vagaries of floating point on the platform, or due to harmless changes in MapServer. To identify these, compare the file in result with the corresponding file in expected and determine what the differences are. If there is just a slight shift in text or other features, it is likely due to floating point differences between platforms; these can be ignored. If something has gone seriously wrong, then track down the problem!
It is also prudent to avoid output image formats that are platform specific. For instance, TIFF output will be big endian on big endian systems, and will therefore differ at the binary level from what was expected. PNG should be pretty safe.
Once a new test map runs correctly and its expected result has been verified, add and commit the map file and expected image to SVN:
% svn add mynewtest.map expected/mynewtest.png
% svn commit -m "new" mynewtest.map expected/mynewtest.png
You’re done!