Monday, May 23, 2011
A custom JUnit Runner
Extending or customizing JUnit with a custom(ized) runner is relatively straightforward. JUnit will choose a custom runner whenever your test is annotated with the @RunWith annotation. The argument of the annotation specifies the desired runner. This runner can be any class that extends Runner and provides a constructor taking the Class<?> descriptor of the annotated test as an argument.
Suppose you wish to execute your tests for a specific set of different locales. Your system allows manipulating the locale in use via a global class LocaleHolder:
class LocaleHolder {
private static Locale locale;
public static Locale set(Locale locale) {
Locale old = LocaleHolder.locale;
LocaleHolder.locale = locale;
return old;
}
public static Locale get() {
return locale;
}
}
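To make the save-and-restore contract of set() concrete, here is a small, self-contained usage sketch (the demo class and its main method are illustration only, not part of the runner):

```java
import java.util.Locale;

public class LocaleHolderDemo {
    // LocaleHolder as defined above, repeated so this sketch compiles on its own
    public static class LocaleHolder {
        private static Locale locale;
        public static Locale set(Locale newLocale) {
            Locale old = LocaleHolder.locale;
            LocaleHolder.locale = newLocale;
            return old;
        }
        public static Locale get() {
            return locale;
        }
    }

    public static void main(String[] args) {
        // set() returns the previous locale so callers can restore it later
        Locale previous = LocaleHolder.set(Locale.FRENCH);
        System.out.println("Locale in use: " + LocaleHolder.get()); // prints "Locale in use: fr"
        LocaleHolder.set(previous); // restore the old state, as the runner will do
    }
}
```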
Now you want to implement your tests just once and have them executed for each possible Locale:
import com.example.junit.Locales;
@RunWith(Locales.class)
public class LocalesTest{
@Test
public void xyzIsCool(){
System.out.println("Locale in use: " + LocaleHolder.get());
}
}
Please note that in the real world you can achieve the same thing by simply using a parameterized JUnit test, but the task is perfectly scoped for an example.
In the first step, we have to implement the custom “Locales” runner. As this runner will execute the same tests multiple times and will therefore be a node in the test runner's GUI, it should inherit from ParentRunner. Suite, which extends ParentRunner, is even better suited, as it implements a complete runner that can be used out of the box:
public class Locales extends Suite {
private static final Iterable<Locale> localesToUse =
asList(Locale.FRENCH, Locale.GERMAN, Locale.ENGLISH);
public Locales(Class<?> klass) throws InitializationError {
super(klass, extractAndCreateRunners(klass));
}
private static List<Runner>
extractAndCreateRunners(Class<?> klass) throws InitializationError {
List<Runner> runners = new ArrayList<Runner>();
for(Locale locale : localesToUse){
runners.add(new LocalesRunner(locale, klass));
}
return runners;
}
}
Basically, the LocalesRunner itself should set the correct locale before each test and execute the test in the same fashion as JUnit would. The simplest way to get new functionality off the ground is to derive from BlockJUnit4ClassRunner, which is the default test runner in JUnit 4. The best way to extend or augment standard JUnit behavior programmatically is to inject your functionality into the Statement chain that the base class runner builds. For more information about extending statements, refer to the various JUnit rules examples, as they use this mechanism, too.
class LocalesRunner extends BlockJUnit4ClassRunner {
private final Locale locale;
LocalesRunner(Locale locale, Class<?> klass) throws InitializationError {
super(klass);
this.locale = locale;
}
@Override
protected Statement methodBlock(final FrameworkMethod method) {
return new Statement() {
@Override
public void evaluate() throws Throwable {
Locale oldLocale = LocaleHolder.set(locale);
try {
LocalesRunner.super.methodBlock(method).evaluate();
} finally {
LocaleHolder.set(oldLocale);
}
} };
}
@Override// The name of the test class
protected String getName() {
return String.format("%s [%s]", super.getName(), locale);
}
@Override// The name of the test method
protected String testName(final FrameworkMethod method) {
return String.format("%s [%s]", method.getName(), locale);
}
}
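The decoration in methodBlock follows a simple wrap-around pattern: save state, switch, delegate, restore in a finally block. Stripped of all JUnit types, the idea can be sketched in plain Java (Statement here is a hypothetical stand-in for JUnit's org.junit.runners.model.Statement, and currentLocale stands in for LocaleHolder):

```java
// Mirrors JUnit's Statement idea without any JUnit dependency
public class AroundSketch {
    interface Statement {
        void evaluate();
    }

    static String currentLocale = "en"; // stands in for LocaleHolder
    static String observed;             // what the wrapped "test" saw

    // Wrap an inner statement: switch state before, restore after (even on failure)
    static Statement around(final String locale, final Statement inner) {
        return new Statement() {
            public void evaluate() {
                String old = currentLocale;
                currentLocale = locale;
                try {
                    inner.evaluate();
                } finally {
                    currentLocale = old; // restore, even if inner threw
                }
            }
        };
    }

    public static void main(String[] args) {
        Statement test = new Statement() {
            public void evaluate() {
                observed = currentLocale; // the "test body" reads the state
            }
        };
        around("fr", test).evaluate();
        System.out.println(observed);      // the test ran under "fr"
        System.out.println(currentLocale); // "en" was restored afterwards
    }
}
```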
Now we are almost done (heard that one before?). The example should compile, and the test at the beginning of this post should get executed for each locale in the list. We also extended the test name output, so that one can see which test was executed for which locale.
A last interesting challenge was to think about how to write unit tests for a custom test runner. In a lot of cases, it was sufficient to provide a simple test case like the one in this example. In other cases I needed to develop the runners incrementally using TDD, so I wanted smaller, more specific unit tests. It turned out that it is pretty easy to run JUnit for my test runner test cases programmatically:
private Result runTest(Class<?> testClass) {
JUnitCore core = new JUnitCore();
return core.run(testClass);
}
Writing such custom runners is obviously not difficult and can be done for specific problems in any project. It is done quickly and helps you provide more convenient ways to write tests according to your project's needs.
Saturday, May 21, 2011
Reuse and composition of Unit Tests
From time to time I want to be able to reuse already finished unit tests, e.g. when I write a new implementation of an interface. In most cases, I wrote an abstract base fixture that I extend as needed:
public abstract class IterableFixture<T> {
protected abstract Iterable<T> create();
@Test
public void iterationDoesSomething(){
assertThat(…);
}
}
Whenever I need to write an Iterable<T> now, I can reuse the generic test like this:
public class MyIterableTest extends IterableFixture<Integer> {
@Override
protected Iterable<Integer> create(){
return new MyIterable<Integer>(…);
}
@Test
public void otherTest(){
assertThat(…);
}
}
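The mechanism behind these base fixtures is the plain template method pattern: the abstract base owns the generic checks, and a subclass only supplies the object under test via create(). Stripped of JUnit annotations, a minimal sketch (class and method names here are invented for illustration) looks like this:

```java
import java.util.Arrays;

// The abstract base carries the reusable check; subclasses only provide create().
abstract class IterableFixtureSketch<T> {
    protected abstract Iterable<T> create();

    // a generic "test": two independent iterations yield the same element count
    boolean iterationIsRepeatable() {
        int first = count(create());
        int second = count(create());
        return first == second && first > 0;
    }

    private int count(Iterable<T> iterable) {
        int n = 0;
        for (T ignored : iterable) {
            n++;
        }
        return n;
    }
}

// A concrete reuse of the generic check for one specific Iterable implementation
class MyIterableFixture extends IterableFixtureSketch<Integer> {
    @Override
    protected Iterable<Integer> create() {
        return Arrays.asList(1, 2, 3);
    }
}
```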
Over time, multiple base fixtures emerged, e.g. for equals/hashCode implementations, iteration, collections, serialization and so on. When new classes needed several of these tests, I was forced to implement multiple test classes, e.g. MyFooIsSerializableTest, MyFooImplementsIterableTest, MyFooEqualsTest, … Although this was simple and practical, I never really liked that I had to check for multiple green bubbles inside my IDE's test runner for one class under test. I also did not like that I could not see in one spot which features my classes were implementing. This was especially true for collection implementations, as they can be un/modifiable, im/mutable and fixed/variable-size, not to mention allowing null and similar variants.
What I really wanted was to run a single test for my class and have each feature nicely listed, in a similar manner to JUnit suites.
Just using a Suite did not help much, because I still had to extend my base fixtures. Additionally, I had to add them to the Suite's parameter list.
One of the reasons that working with this approach was so clumsy was the plethora of derived classes - one for each feature that was implemented. So I decided that using a factory that gets injected into the test class's constructor would simplify the approach. Then I looked at JUnit's Parameterized runner, as it has the similar task of instantiating test fixtures with constructor arguments. The JUnit extension mechanism for custom runners proved to be very straightforward, so I decided to build my own Features runner.
While thinking about how this runner would be used, I tried different variants of where to place which annotation. My requirements were:
I spiked several versions. Here is an example of the final one: a class testing a collection for the features of being iterable and unmodifiable.
@RunWith(Features.class)
public class ArrayAdapterTestSuite {
@Feature(Unmodifiable.class)
public static Unmodifiable.Factory<Integer> unmodifiableFeature() {
return new Unmodifiable.Factory<Integer>() {
@Override
public Collection<Integer> createCollection() {
return asList(0, 1, 2);
}
@Override
public Integer createUniqueItem(int id) {
return id;
}
};
}
@Feature(Iterable.class)
public static Iterable.Factory<Integer> iterableFeature() {
return new Iterable.Factory<Integer>() {
@Override
public java.lang.Iterable<Integer> createIterable() {
return asList(0, 1, 2);
}
};
}
}
The feature fixture is implemented like this:
public class Unmodifiable<T> implements FeatureFixture {
private final Factory<T> factory;
public Unmodifiable(Factory<T> factory) {
this.factory = factory;
}
public interface Factory<T> {
Collection<T> createCollection();
T createUniqueItem(int id);
}
@Test(expected = UnsupportedOperationException.class)
public void addIsUnsupported() {
Collection<T> unmodifiable = factory.createCollection();
unmodifiable.add(factory.createUniqueItem(42));
}
}
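The behavior that addIsUnsupported asserts can be seen with a plain JDK collection: the fixed-size list returned by Arrays.asList rejects add() with an UnsupportedOperationException. A self-contained demo (no JUnit involved):

```java
import java.util.Arrays;
import java.util.Collection;

public class UnmodifiableDemo {
    public static void main(String[] args) {
        Collection<Integer> unmodifiable = Arrays.asList(0, 1, 2);
        try {
            unmodifiable.add(42); // Arrays.asList returns a fixed-size list
            System.out.println("add succeeded (unexpected)");
        } catch (UnsupportedOperationException e) {
            // exactly the contract the Unmodifiable feature fixture checks
            System.out.println("add is unsupported");
        }
    }
}
```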
The feature test is self-contained. Only a factory needs to be supplied. The features are assembled by tagging a public static method creating a factory, or a public static field, with a @Feature annotation. The runner will ensure that the types of the factory and the annotated feature match. This is a runtime check, as there is no way to achieve this at compile time because of Java's type erasure.
The Features suite runner will get executed because of the @RunWith annotation on top of your test class. JUnit will instantiate it and pass it the Class<?> descriptor of your test. The class is inspected using reflection, the same way JUnit does it. The gathered information is used to construct an explicit FeatureRunner with the feature test's Class<?> descriptor and the extracted factory.
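The reflective scan itself needs no JUnit types at all. The following sketch models the method-based variant (the annotation, sample suite, and helper names are simplified stand-ins for the real implementation): it finds public static methods carrying @Feature and invokes them to obtain the factory instances.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class FeatureScanSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Feature {
        Class<?> value(); // the feature fixture class to run
    }

    // a sample suite class, modeled on the ArrayAdapterTestSuite example
    public static class SampleSuite {
        @Feature(Runnable.class) // the feature class is irrelevant for the scan itself
        public static String someFactory() {
            return "factory instance";
        }
    }

    // find public static @Feature methods and invoke them to obtain the factories
    static List<Object> extractFactories(Class<?> klass) throws Exception {
        List<Object> factories = new ArrayList<Object>();
        for (Method m : klass.getMethods()) { // getMethods() yields public methods only
            if (Modifier.isStatic(m.getModifiers())
                    && m.isAnnotationPresent(Feature.class)) {
                factories.add(m.invoke(null)); // static method: no receiver needed
            }
        }
        return factories;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractFactories(SampleSuite.class)); // prints [factory instance]
    }
}
```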
Because the FeatureRunner would execute a group of test cases, I decided to use Suite as a base class for the implementation:
public class Features extends Suite {
public Features(Class<?> klass) throws InitializationError {
super(klass, extractAndCreateRunners(klass));
}
private static List<Runner> extractAndCreateRunners(Class<?> klass)
throws InitializationError {
List<Runner> runners = new ArrayList<Runner>();
for (FeatureAccessor field : extractFieldsWithTest(klass)) {
Class<? extends FeatureFixture> test = field.getFeature();
runners.add(new FeatureRunner(test, field.getFactory()));
}
addSuiteIfItContainsTests(klass, runners);
return runners;
}
private static void addSuiteIfItContainsTests(Class<?> klass,
List<Runner> runners) {
try {
runners.add(new BlockJUnit4ClassRunner(klass));
} catch (InitializationError e) {// do nothing, no tests
}
}
private static abstract class FeatureAccessor<TField …> {
private final TField field;
static <TField extends …> boolean isValid(TField field) {
return Modifier.isPublic(field.getModifiers())
&& Modifier.isStatic(field.getModifiers())
&& field.isAnnotationPresent(Feature.class);
}
static <TField …> FeatureAccessor<?> createFrom(final TField field) {
…
}
Class<? extends FeatureFixture> getFeature() {
return field.getAnnotation(Feature.class).value();
}
Object getFactory() {
…
return this.field.invoke(null);
}
}
private static List<FeatureAccessor> extractFieldsWithTest(Class<?> klass) {
List<FeatureAccessor> factoryFieldsWithProperty =
new ArrayList<FeatureAccessor>();
for (Field field : klass.getFields()) {
if (!FeatureAccessor.isValid(field)) {
continue;
}
factoryFieldsWithProperty.add(FeatureAccessor.createFrom(field));
}
…
return factoryFieldsWithProperty;
}
}
The FeatureRunner simply extends JUnit's default runner by passing the factory instance into the test constructor:
class FeatureRunner extends BlockJUnit4ClassRunner {
private final Object factory;
FeatureRunner(Class<?> klass, Object factory)
throws InitializationError {
super(klass);
if(factory == null){
throw new InitializationError(…);
}
this.factory = factory;
}
@Override
protected Object createTest() throws Exception {
return getTestClass().getOnlyConstructor().newInstance(factory);
}
…
}
Changing my base class fixtures to FeatureFixtures was extremely simple, and composing standard unit test bits really is fun now! If you are interested in this solution, the source code is on Google Code. I hope it will help you as much as it helps me!
Wednesday, January 6, 2010
Creating a custom (multi module) Maven archetype
In the last couple of days I was working on creating a custom Maven archetype for a multi module project. I used the Maven archetype plugin version 2.0-alpha-4.
What I wanted to do
When I write a component or module, I usually use a Maven project structure consisting of (at least) two sub projects. One contains the abstract API of the component, the other the concrete implementation:
component
|-- pom.xml
|-- component.api
|   `-- pom.xml
`-- component.imp
    `-- pom.xml
The implementation project references the API project using Maven's property mechanism to keep version and group ID in sync.
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>component.api</artifactId>
<version>${project.version}</version>
</dependency>
Both sub projects inherit from the top level pom:
<parent>
<groupId>my.groupId</groupId>
<artifactId>component</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
And the top level pom specifies the api and imp project as modules:
<modules>
<module>component.api</module>
<module>component.imp</module>
</modules>
So I basically wanted a custom archetype that generates this kind of project structure for me. As I stumbled over various issues along the way, I have collected all of my experiences in this blog entry.
Creating from an existing project
Using archetype:create-from-project, you can create a working archetype from an existing project. Unfortunately, it did not work for my existing projects out of the box. However, after building an extremely simplified example project that had no parent element and hardly any (actually no) properties in the pom, it worked. The plugin generates a basic project structure for your archetype project and all necessary files, and extracts properties like version, groupId and name. The generated archetype is absolutely sufficient when your archetype should just generate a skeleton project that contains all necessary dependencies. If you want to provide additional things like correctly packaged boilerplate code, you have to spend some extra effort. But the generated archetype is still a valuable starting point then.
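For reference, the invocation is a single goal run from the root of the (simplified) example project; the output location below is where the plugin placed the generated archetype for me:

```
# run inside the simplified example project
mvn archetype:create-from-project

# the generated archetype project then appears under
#   target/generated-sources/archetype
```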
The archetype project layout
The project layout for a Maven archetype project looks like this:
archetype-project
|-- pom.xml
`-- src
    `-- main
        `-- resources
            |-- META-INF
            |   `-- maven
            |       |-- archetype.xml
            |       `-- archetype-metadata.xml
            `-- archetype-resources
                |-- pom.xml
                `-- ...
The top level archetype-project pom
The top level pom declares the standard properties (groupId, artifactId, …) for the archetype project and pulls in the desired version of the archetype plugin:
<?xml version="1.0" encoding="UTF-8"?>
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>net.amutech.archtypes</groupId>
<artifactId>component-archetype-simple</artifactId>
<version>1.0</version>
<packaging>maven-archetype</packaging>
<name>component-archetype-simple</name>
<build>
<extensions>
<extension>
<groupId>org.apache.maven.archetype</groupId>
<artifactId>archetype-packaging</artifactId>
<version>2.0-alpha-4</version>
</extension>
</extensions>
<plugins>
<plugin>
<artifactId>maven-archetype-plugin</artifactId>
<version>2.0-alpha-4</version>
<extensions>true</extensions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<configuration>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
</plugins>
</build>
</project>
The archetype-resources folder
This folder contains all files and templates needed to create the new project from the archetype. Basically, it represents the project's future structure. For a single project it might look like this:
archetype-resources
|-- pom.xml
`-- src
    |-- main
    |   `-- java
    |       `-- SomeClass.java
    `-- test
        `-- java
            `-- SomeClassTest.java
For a multi module project it could look like this:
archetype-resources
|-- pom.xml
|-- module1
|   `-- src
|       |-- main
|       |   `-- java
|       |       `-- AClass1.java
|       `-- test
|           `-- java
|               `-- AClass1Test.java
`-- module2
    `-- src
        |-- main
        |   `-- java
        |       `-- AClass2.java
        `-- test
            `-- java
                `-- AClass2Test.java
The archetype.xml archetype descriptor
This file specifies all files that are used for project creation by the archetype. The basic form of the XML looks like this:
<archetype>
<id>archetype-id</id>
<sources>
<source>src/main/..</source>
</sources>
<testSources>
<source>src/test/..</source>
</testSources>
<resources>
<resource>src/main/resources/..</resource>
</resources>
<testResources>
<testResource>src/test/resources/..</testResource>
</testResources>
<siteResources>
<siteResource>src/site/..</siteResource>
</siteResources>
</archetype>
The documentation states that these tags represent the different sections of the project. This is correct for a single module, but not exactly for multi module archetypes. For instance, a source file inside module2 has to be specified like this:
<resources>
<resource>module2/src/main/java/AClass2.java</resource>
</resources>
This is because the top level folder is named “module2”, not “src”. Trying to declare it as a source file will lead to an error when executing the archetype.
The archetype-metadata.xml file
This file describes what will be done with the files specified in the archetype descriptor. The documentation on the plugin site is good reference material. You basically define file sets, specify where they will be located and whether they will be filtered in several ways.
Property replacement in files
Maven seems to use Velocity as a template engine internally, as the property replacement syntax looks exactly like it. In any file, a statement like ${thePropertyName} will be replaced with the corresponding value. For instance: if your archetype's groupId is “my.groupId”, all occurrences of ${groupId} will be replaced with this value. This applies to all files that are specified to be filtered inside the archetype-metadata.xml file:
<fileSet filtered="true" packaged="true" encoding="UTF-8">
<directory>src/main/java</directory>
<includes>
<include>**/*.java</include>
</includes>
</fileSet>
Property replacement in files and folder names
By using __propertyName__ inside a file or folder name, the corresponding property value will be inserted there. For instance, a file named __groupId__-specialFile.xml will be renamed to my.groupId-specialFile.xml when the archetype is executed.
Extra Properties for multi module archetypes
There are some extra properties for multi module archetypes:
rootArtifactId | The artifactId that the root project will have. This is the artifactId specified when the archetype is used to create a new project. |
parentArtifact | Inside a sub project, this is the artifactId of the parent project. |
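As a sketch of how these properties are typically used (the hyphenated module artifactId below is just an illustration): a sub-module pom template inside archetype-resources can reference the root project via ${rootArtifactId}, which Velocity fills in when the archetype is executed:

```xml
<!-- pom.xml template of a sub module inside archetype-resources -->
<parent>
  <groupId>${groupId}</groupId>
  <artifactId>${rootArtifactId}</artifactId>
  <version>${version}</version>
</parent>
<artifactId>${rootArtifactId}-api</artifactId>
```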
Escaping
Velocity uses the "$" sign as an identifier. Some artifacts that the archetype needs to generate are Velocity templates themselves, e.g. pom files using ${project.artifactId} properties. These can be escaped by placing a #set( $symbol_dollar = '$' ) at the beginning of the file:
#set( $symbol_dollar = '$' )
<project>
<name>${symbol_dollar}{project.artifactId}</name>
</project>
In general, you can escape any special character like this.
Correct packaging of java classes
In my archetype, I wanted to provide some boilerplate and sample Java code to demonstrate the intended packaging and naming conventions of my components. The property to use inside a file for the correct package is ${package}. That is intuitive! However, if you want Maven to create the corresponding folder structure as well, you have to specify this on the file set inside the archetype-metadata.xml:
<fileSet filtered="true" packaged="true" encoding="UTF-8">
<directory>src/main/java</directory>
<includes>
<include>**/*.java</include>
</includes>
</fileSet>
I also wanted some specific sub packages to be created as well. For instance, the imp project has a package ${package}.imp. In order to have the folder structure generated correctly, I had to place the template Java files in an imp sub directory, as the package statement just uses the package property and the plugin does not scan the filtered source files:
|
`-- component.imp
    `-- src
        `-- main
            `-- java
                `-- imp
                    `-- AClass2.java