Sunday, November 25, 2012

Maven World


In the software development process, one of the most painful parts is the initial project configuration and the production of stable builds. This is especially true when the project relies on other dependent services and custom APIs. Managing builds along with version information tied to the source management system is difficult. The problem gets harder when multiple teams work simultaneously on different components of the project, update their code, and test their builds locally with no distribution management or central build standardization in place. It is certainly frustrating to discover, after working with an API for a while, that it has been modified in the meantime. Generating a project build has itself become a complex process with a growing number of dependent components, external resources, configurations and environments. Although build tools such as Ant handle build automation, they fail to address the issues surrounding build distribution, versioning and standardization. Ant tasks can name the build version and copy the build to a location that the source control system later maintains, but Ant still falls short on some critical problems: it has no formal conventions such as a standard directory structure, and it has no lifecycle for its tasks, which simply execute procedurally.
   Maven, since its emergence, has targeted all of the above issues and provides a reliable project management tool along with build management capabilities. It offers dependency management and project lifecycle management, and is driven by the Project Object Model (POM). Maven performs all its operations, including common build tasks, using plugins that are shared and maintained in a central repository. The plugins use the data provided in the POM as well as configuration parameters to carry out their tasks. Maven maintains a model of the project along with the dependencies and plugins required from the repository, thus promoting reuse of build logic. The project model defines a unique set of coordinates consisting of a group identifier, an artifact identifier and a version, and the same coordinates are used to declare dependencies. These coordinates are used to organize repositories of Maven artifacts, which, with the help of tools such as Nexus and Artifactory, can be accessed remotely as well. Such remote repositories usually mirror the Maven central repository.
    Installing Maven is quick and easy: download the Maven binaries, extract them into a directory, add an M2_HOME system variable pointing to the Maven installation path, append the Maven bin directory to the PATH system variable, and optionally add a MAVEN_OPTS system variable providing JVM execution parameters (e.g. -Xmx1024m -Xms512m -XX:MaxPermSize=512m). The Maven settings file normally lives in the user profile directory (~/.m2/settings.xml, alongside the ~/.m2/repository/ folder used as the local repository) or in the conf directory of the Maven home/installation path. The settings.xml mainly provides the URLs for the remote repositories. The location of the settings.xml can be overridden on the command line using the --settings or -s option as below:
mvn --settings ~/.m2/settings-customer1.xml clean install
mvn -s ~/.m2/settings-customer1.xml clean install
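
For reference, a minimal settings.xml usually does little more than point Maven at the local repository and at an internal mirror of central; the host name and paths below are illustrative:

<settings>
  <!-- location of the local repository (defaults to ~/.m2/repository) -->
  <localRepository>${user.home}/.m2/repository</localRepository>
  <mirrors>
    <!-- route requests for central through an internal repository manager -->
    <mirror>
      <id>internal-nexus</id>
      <url>http://nexus.example.com/nexus/content/groups/public</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>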

A Maven project has a standard structure inside the ${basedir} directory, which is the directory containing pom.xml. Source code lives in ${basedir}/src/main/java, resources in ${basedir}/src/main/resources, tests in ${basedir}/src/test/java, and the compiled output in the ${basedir}/target directory.
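
Laid out on disk, a minimal single-module project therefore looks roughly like this (the project name is illustrative):

my-app/
  pom.xml
  src/main/java/        (application source code)
  src/main/resources/   (configuration files and other resources)
  src/test/java/        (unit test source code)
  src/test/resources/   (test-only resources)
  target/               (generated build output)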


Project Object Model (POM)
The POM contains the groupId, artifactId, packaging and version information, which form the coordinates that uniquely identify the project and define relationships to other projects through dependencies, parents and prerequisites. The name and url are descriptive elements useful for Maven site generation. The dependency element defines the coordinates of a dependent project or plugin and provides the scope attribute to limit the transitivity of the dependency. The following six scopes are available:
  • compile: The default scope; the dependency is available in all classpaths of the project and is propagated to dependent projects.
  • provided: Indicates that the JDK or a container is expected to provide the dependency at runtime.
  • runtime: Indicates that the dependency is not required for compilation, but is required for execution.
  • test: Indicates that the dependency is only available for the test compilation and execution phases.
  • system: Similar to provided, except that the JAR containing the dependency must be supplied explicitly.
  • import: Indicates that the specified POM should be replaced with the dependencies in that POM's <dependencyManagement> section.
    The dependency section can also have a classifier attribute, which distinguishes artifacts built from the same POM that differ in either Java version or build type (jar/ear etc.). Also, if some of the transitive dependencies pulled in by a dependency are not needed, they can be excluded using the exclusions section, targeted at a specific groupId and artifactId.
Dependencies can be specified directly in the POM using the dependencies element or in the dependencyManagement section. The dependencyManagement section allows consolidating and centralizing the management of dependency versions without adding the dependencies themselves, which are then inherited by all children; this is especially useful for a set of projects with a common parent.
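
As a sketch (group, artifact and version values are illustrative), a parent POM can pin a version in dependencyManagement while a child declares the dependency without a version and excludes an unwanted transitive artifact:

<!-- parent POM -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>3.1.2.RELEASE</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- child POM: the version is inherited from the parent's dependencyManagement -->
<dependencies>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <exclusions>
      <exclusion>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>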

Maven executes against the effective POM, which is the combination of the project's POM, all parent POMs, the Maven super-POM, user-defined settings and active profiles. All Maven projects ultimately extend the super-POM, which defines a set of sensible default configuration settings. Dependency versions can be overridden in turn from the super-POM through settings.xml and the parent POM down to the child POMs. The following command shows the effective POM:
  mvn help:effective-pom

Maven carries out all its operations using plugins, as mentioned before. A Maven plugin is a collection of one or more goals. A goal, in turn, is a unit of work: a specific task executed either standalone or together with other goals (invoked as pluginId:goalId). For example, the compiler plugin has a compile goal to compile source code, and the surefire plugin contains goals for executing tests and generating reports. Goals can be customized via configuration properties and can define parameters with default values. Goals execute in the context of the POM that defines the project. Besides executing tasks as goals, Maven defines build lifecycles. A phase is a single step in a Maven build lifecycle, which is an ordered sequence of phases used to build the project. Plugin goals can be attached to a lifecycle phase, and a phase can have zero or more goals. When Maven moves through the phases of a lifecycle, it executes the goals attached to each phase. Maven supports custom lifecycles, but the default lifecycle is used predominantly. Executing a phase first executes all preceding phases in order, ending with the phase specified on the command line.
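
For example, attaching a plugin goal to a lifecycle phase only requires declaring an execution; the sketch below (the plugin version is illustrative) binds the maven-source-plugin jar goal to the package phase so a sources jar is produced on every mvn package:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <version>2.2.1</version>
      <executions>
        <execution>
          <id>attach-sources</id>
          <phase>package</phase>    <!-- lifecycle phase the goal is bound to -->
          <goals>
            <goal>jar</goal>        <!-- plugin goal executed during that phase -->
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>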
    A Maven repository is a collection of project artifacts stored in a directory structure that closely matches the Maven coordinates. Maven looks up an artifact in the local repository first and, if it is not found, tries to download it from a remote repository. Maven downloads the POM files of each dependency along with the artifacts in order to support transitive dependencies, i.e. the dependencies declared in the POM files of other artifacts. Maven adds all the dependencies of a library to the project's dependencies and resolves version conflicts implicitly with its default behavior (the nearest definition wins). Maven uses the dependencies already present in the local repository, whether built from a local project or downloaded from a remote repository, even if the same local project currently fails to build. Also, if a project in the middle of the dependency chain is rebuilt and installed into the local or remote repository, there is no need to rebuild all the other projects that depend on it.

Imagine a project divided into multiple components, with each component dependent on another to compile, such as in a traditional 3-tier architecture. The multi-module project type in Maven is designed for this. A multi-module project doesn't produce an artifact of its own (its packaging is pom); it is composed of several other projects known as modules. In a multi-module project, Maven propagates all commands issued on the parent project to its child projects, automatically discovering the correct execution order and detecting circular dependencies. A multi-module project is set up as follows:

1) Create a new project directory containing all the child projects.
2) Create a new pom file with new artifactId and packaging as "pom".
3) Declare a modules section in the POM file listing a module entry for each child project located in the sub-directories, for example:

   <modules>
     <module>simple-weather</module>
     <module>simple-webapp</module>
   </modules>

  It is vital to specify repository information in the pom.xml in order to download all the dependent jars and Maven plugins. The repositories section in the POM serves this purpose and lists all the available repositories using repository elements. The distributionManagement section also specifies repository information, but it is only used for distribution of the artifact and supporting files generated throughout the build process. The pluginRepositories section, similar to the repositories section, specifies remote locations where Maven can find new plugins using pluginRepository elements. A continuous integration management section, using the ciManagement element, specifies the build system used by the project and the URL of the job; notifier settings can also be configured there to trigger emails based on build status. A source code management (scm) section provides information about the project's version control system, as in the example below. The connection parameter gives Maven read access to the source code, the developerConnection gives write access, and the url parameter specifies a view to browse the repository.
  
  <scm>
    <url>https://mercurial.local.com/repo/lmo/project-repo/</url>
    <connection>scm:hg:ssh://hg@mercurial.local.com//mercurial/lmo/project-repo/</connection>
    <developerConnection>scm:hg:ssh://hg@mercurial.local.com//mercurial/lmo/project-repo/</developerConnection>
  </scm>


Maven tries to keep builds portable by holding the build configuration in the pom.xml, avoiding references to the local file system and relying on metadata from the repository. But there are circumstances where the build requires slight changes in dependencies, paths on the local file system, or extra configuration steps, making a single build configuration impossible to use across different environments. To handle such cases, Maven introduced the concept of a build profile. A profile consists of configuration for a subset of POM elements which modify the POM at build time, giving different build results depending on the environment; this is more maintainable than keeping an alternate POM and selecting it with the -f option. Profiles can also be specified in the Maven settings.xml file, for example to configure different repositories. Profiles in the settings.xml can be activated via the <activeProfiles> section, which takes a list of <activeProfile> elements, each containing a profile id. Profiles listed in the <activeProfiles> tag are activated by default every time the settings are used. Below is a sample profile section in the settings.xml:
<profiles>
 <profile>
  <id>ext-plugins</id>
  <pluginRepositories>
   <pluginRepository>
    <id>extPlugins-releases</id>
    <url>http://dx.server.com:8080/nexus/content/repositories/releases
    </url>
   </pluginRepository>
   <pluginRepository>
    <id>extPlugins-snapshot</id>
    <url>http://dx.server.com:8080/nexus/content/repositories/snapshots
    </url>
   </pluginRepository>
  </pluginRepositories>
 </profile>
</profiles>

<activeProfiles>
 <activeProfile>ext-plugins</activeProfile>
</activeProfiles>
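
A profile can also be activated explicitly for a single invocation using the -P option instead of (or in addition to) listing it under <activeProfiles>:

mvn clean install -P ext-plugins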

Maven also supports running Ant tasks or targets embedded in the POM using the Maven AntRun Plugin. It helps with migrating from Ant scripts to Maven and also provides a way to execute custom commands during the build. In order to execute the 'run' goal of the maven-antrun-plugin early in the build, it is bound here to the 'validate' phase of the Maven lifecycle. Also, to execute conditional Ant tasks such as 'if' or 'equals', a reference to ant-contrib is required; hence we include a taskdef with the resource "net/sf/antcontrib/antlib.xml" loaded from the "maven.plugin.classpath". Further, the ant-contrib and ant-nodeps dependencies are added to the plugin's dependencies section as follows.
<build>
 <plugins>
 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.6</version>
  <executions>
   <execution>
    <id>prepare</id>
    <phase>validate</phase>
    <configuration>
     <tasks>
      <taskdef resource="net/sf/antcontrib/antlib.xml" classpathref="maven.plugin.classpath" />
      <propertyregex property="ear.artifactId" input="${artifactId}" regexp="-web$" 
          replace="-ear" global="true" defaultValue="${artifactId}" />
      <echo message="checking ${site.root}\${artifactId}\${buildenv}\*.zip exists" />
      <fileset dir="${site.root}\${artifactId}\${buildenv}" includes="**/*.zip" id="checkdir"/> 
      <if>
       <equals arg1="${toString:checkdir}" arg2="" /> 
<!--       <available file="${site.root}\${artifactId}\${buildenv}\*.zip" />  -->
       <then>
        <echo>Zip file does not exists</echo>
       </then>
       <else>
        <echo message="extracting ${site.root}\${artifactId}\${buildenv}\*.zip" />
        <unzip dest="${site.root}\${artifactId}\${buildenv}\">
         <fileset dir="${site.root}\${artifactId}\${buildenv}" includes="**/*.zip" />
        </unzip> 
        <delete>
         <fileset dir="${site.root}\${artifactId}\${buildenv}" includes="**/*.zip"/>
        </delete>
       </else>
      </if>
     </tasks>
    </configuration>
    <goals>
     <goal>run</goal>
    </goals>
   </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>ant-contrib</groupId>
      <artifactId>ant-contrib</artifactId>
      <version>1.0b3</version>
      <exclusions>
         <exclusion>
           <groupId>ant</groupId>
           <artifactId>ant</artifactId>
         </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.ant</groupId>
      <artifactId>ant-nodeps</artifactId>
      <version>1.8.1</version>
    </dependency>
  </dependencies>
 </plugin>
 </plugins>
</build>

Reporting Section
Maven provides a reporting section (element) which allows including additional reporting plugins for Maven site generation. The Maven site contains the current status details of the project, such as dependency information, module information, javadocs etc. Reports can be run separately on individual modules, or can be aggregated (in the case of aggregator reports) by defining reportSets for the plugins. The inherited element specifies whether the report plugin configuration should be applied to the POMs of child projects that inherit from the current project; by default it is true. The same element is also available for build plugins.
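
As a minimal sketch (the plugin version is illustrative), a reporting section that adds javadoc generation to the generated site could look like this:

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <version>2.9</version>
      <!-- apply this report configuration to child modules as well -->
      <inherited>true</inherited>
    </plugin>
  </plugins>
</reporting>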

Running Maven in Eclipse
In order to run Maven commands in Eclipse as an external tool, follow the steps below:
  1. Install Maven (preferably Maven 3.0) on the machine and note its installation path (e.g. C:\Program Files\..).
  2. In Eclipse, from the menu bar, select Window > Preferences. Expand Run/Debug > String Substitution. Add a new variable, e.g. maven_exec, and select the file "mvn.bat" from the installed Maven location as its value.
  3. Set up a new external launcher: from the menu bar, select Run > External Tools > External Tool Configurations, then select Program.
  4. Create each Maven task, for example the task to build the Eclipse classpath:
    1. Under Program, create a new configuration and name it, e.g. build_classpath (any name will do)
    2. In the Location box, choose the created variable: ${maven_exec}
    3. In Working Directory, choose the variable: ${workspace_loc}
    4. In Arguments, give the Maven command, here: eclipse:clean eclipse:eclipse
    5. Click Apply and Run
  5. The external tool can then be launched via the Run button, and the output appears in the Console tab.


Maven Commands:

1) The following are some of the most used Maven command-line options:

-cpu : Check for plugin updates
-D   : Define a system property
-e   : Display execution error messages
-f   : Use an alternate POM file
-fae : Only fail the build at the end, allowing unaffected builds to continue
-ff  : Stop at the first failure in the build
-fn  : Never fail the build regardless of the project result (mostly used when tests fail in a local build)
-N   : Do not recurse into sub-projects
-npu : Do not check for updates of any plugin releases
-o   : Work in offline mode (used to avoid pulling new updates, sandboxing local development)
-rf  : Resume the reactor from the specified project
-U   : Force a check for updated snapshots and releases on the remote repositories
-up  : Similar to -cpu, updates plugins
-X   : Produce debug execution output
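
These options combine freely; for example, an offline, non-recursive build that stops at the first failure and skips tests could be invoked as:

mvn clean install -o -N -ff -Dmaven.test.skip=true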


2)  The archetype creation goal looks for an archetype with a given groupId, artifactId, and version and retrieves it from the remote repository. It is then processed against a set of user parameters to create a working Maven project (deprecated in favor of archetype:generate):
mvn archetype:create  -DgroupId=org.apache.maven.plugin.my  -DartifactId=maven-my-plugin  -DarchetypeArtifactId=maven-archetype-mojo

3)  Generates a new project from an archetype in a directory corresponding to its artifactId, or updates the actual project if using a partial archetype in the current directory:
mvn archetype:generate -DgroupId=org.apache.maven.plugins -DartifactId=link-globals -DpackageName=org.apache.maven.plugins.http -Dversion=1.0-SNAPSHOT

Choose a number or apply filter: ... : 233: 2
Choose a number: 2: 2

4)  Generates Eclipse configuration files such as the .project and .classpath files, the .settings/org.eclipse.jdt.core.prefs file with project-specific compiler settings, and various configuration files for WTP (Web Tools Project) if wtpversion is specified:
mvn eclipse:eclipse

Note: If we didn't make any typos in the group/artifactIds, Eclipse should be able to resolve the dependencies; this is because Eclipse has something called workspace resolution, which should be turned on by default. Workspace resolution basically means 'look in the workspace projects first, and then look in the Maven repository'. This mechanism allows editing modules and having the changes immediately visible in other modules (including dependency updates), without having to do a mvn clean install first to get the updated module into the local m2 repository. The versions have to match up, however, so for changes to be visible we should refer to the latest -SNAPSHOT version.

The --resume-from option allows continuing from the specified module if there is a failure in prior modules.
mvn eclipse:eclipse --resume-from module-c.war

The eclipse plugin creates subprojects for the dependencies which exist in the reactor. In case working with the deployed packages is preferred over the development code, useProjectReferences is set to false as below:
mvn eclipse:eclipse -Declipse.useProjectReferences=false

5) Deletes the .project, .classpath, .wtpmodules files and .settings folder used by Eclipse:
mvn eclipse:clean

6) Adds the classpath variable M2_REPO to eclipse in order to recognize the dependencies (eclipse:add-maven-repo which also did the same is currently deprecated):
mvn eclipse:configure-workspace -Declipse.workspace=<path to the workspace> 

mvn eclipse:add-maven-repo -Declipse.workspace=<path to the workspace>

7) Skips the execution of tests for a particular project across all modules. It is a property defined by the Maven Surefire plugin.
mvn install -DskipTests=true
mvn install -DskipTests

8)  Skips the compilation and execution of the unit tests. The maven.test.skip property is honored by the Surefire, Failsafe and Compiler plugins to skip compilation and execution of all tests:
mvn install -Dmaven.test.skip=true
mvn install -Dmaven.test.skip

9)  Installs an artifact into the local repository and skips the execution of integration tests
mvn -DskipITs=true install

10)  Runs all the integration tests and also builds a package
mvn verify

11) To continue and build a project even when the Surefire plugin encounters failed test cases:
mvn test -Dmaven.test.failure.ignore=true

12) Executes all the integration tests which are wired using the Failsafe plugin:
mvn integration-test

13)  Cleans all compiled classes, builds while skipping all tests, and resumes from the specified project after a failure:
mvn clean install --resume-from mamos-services-web -Dmaven.test.skip=true

14) Makes sure it gets latest snapshot from the server:
mvn -U install

15) Dependency tree lists all dependencies with child dependencies. Dependency resolve finds all the resolved dependencies from the repository (showing the latest available release versions):
mvn dependency:tree >%temp%\dep.txt
mvn dependency:resolve

16)  Executes the java main class using the Exec plugin from Codehaus mojo project:
mvn exec:java -Dexec.mainClass=org.sonatype.mavenbook.weather.Main

17)  Assembles an application bundle or distribution from an assembly descriptor in an archive format, by grouping the files, directories, and dependencies.
mvn install assembly:single

18)  Adds a manually downloaded jar to the Maven local repository (install:install-file) or remote repository (deploy:deploy-file) respectively, with the specified groupId and artifactId.
mvn install:install-file -Dfile=rally-rest-api-1.0.6.jar -DgroupId=rally-rest-api -DartifactId=rally-rest-api -Dversion=1.0.6 -Dpackaging=jar

mvn deploy:deploy-file -Dfile=rally-rest-api-1.0.6.jar -DgroupId=com.rallydev -DartifactId=rally-rest-api -Dversion=1.0.6 -Dpackaging=jar -Durl=http://reposerver.com/nexus/content/repositories/thirdparty/

19)  Deletes or purges the specified Maven groupId from the local repository. If no manualInclude is specified, it purges the current project's dependencies from the local repository. This helps remove old dependencies loaded into the local repository which may no longer be present in the central repository, avoiding build-related issues.
mvn org.apache.maven.plugins:maven-dependency-plugin:2.6:purge-local-repository -DmanualInclude=org.springframework

20)  The Maven release plugin is used to release the project and increment the development version. It performs the project release operation in three steps: prepare, perform and clean.
  The release:prepare step removes all the SNAPSHOT versions from the POMs by default, runs the tests, commits the POMs, tags the release version, increments the POM version to the next SNAPSHOT and commits the modified POMs. It provides many options to specify release version details. The 'releaseVersion' and 'developmentVersion' parameters can be used to set the release and development versions respectively; otherwise the user is prompted for these values. The 'ignoreSnapshots' option allows SNAPSHOT dependencies to remain. Setting 'updateDependencies' to false prevents updating dependency versions to the next development version. The 'pushChanges' parameter is implemented for Git and determines whether the changes should be pushed to the remote repository. The 'scmCommentPrefix' is used to add a customized message/comment while pushing the changes.
  The release:perform step checks out the tagged release from the SCM URL and builds it using the deploy Maven goal. If a rollback is required, the release:rollback step can be executed as long as release:clean has not yet been run; it reverts all the POM changes and removes the release tag from the SCM. Finally, the release:clean step deletes the release descriptor and all the backup POM files.

mvn release:prepare -DreleaseVersion=1.0 -DdevelopmentVersion=1.1-SNAPSHOT -DignoreSnapshots -DupdateDependencies=false
mvn release:perform
mvn release:clean

In case we don't want to push the release plugin changes to the source control, the pushChanges parameter can be passed as false.
mvn release:prepare -DpushChanges=false

21)  Increases the number of concurrent threads used to download Maven dependencies, speeding up the build, especially when building for the first time.
mvn install -Dmaven.artifact.threads=4

22)  Managing failures: Maven offers three ways of managing failures in reactor builds: fail-fast (the default), fail-at-end and fail-never.
  1. The Fail Fast policy stops the reactor build after the first failing module. Although it is the default, it can be requested explicitly using the --fail-fast or -ff parameter.
  2. The Fail At End policy fails the build only at the end, allowing all non-impacted modules to continue. To enable this policy, use the --fail-at-end or -fae parameter. This avoids propagating the failure to unaffected modules, but the overall build is still marked as failed.
  3. The Fail Never policy never fails the build regardless of the project results; all failures are ignored and the build just continues. It can be enabled using the --fail-never or -fn parameter.
mvn clean install --fail-at-end
mvn clean install --fail-never

23)  The Maven help plugin describes the goals, i.e. the attributes, of the specified plugin or Mojo.
mvn help:describe -Dplugin=pluginname

24)  Compiles the project and executes the specified main class using the 'exec.mainClass' parameter.
mvn compile exec:java -Dexec.mainClass=com.company.application.Test

25)  The Maven dependency plugin copies the dependencies from the repository to a defined location (by default the target/dependency directory).
mvn dependency:copy-dependencies

26)  Downloads (resolves) the source code and javadocs for each of the dependencies in the POM file.
mvn dependency:sources
mvn dependency:resolve -Dclassifier=javadoc

Monday, November 12, 2012

In-Memory Database using Hibernate and HQL


Unit testing is an integral part of any software development cycle. A large project involves many database systems, each separated by its role in the business systems, and many services rely on those databases to provide refined results based on client input. As each database may reside on a different machine, multiple connections are required. Further, the schema for each business operation may vary and tends to contain a highly normalized set of tables. Normalization, though good for reducing redundancy, introduces a new hurdle of inter-dependency between tables, increasing the effort needed to insert dummy records for testing purposes. The situation gets worse when records are referenced between tables across multiple databases. Inserting records for test purposes would resolve such issues, but in a large corporation with numerous cross-continent teams collaborating on the same database, it is hard to ensure the records stay untouched. One workaround is to insert and delete the records for each test, but this keeps connections occupied for unit tests and fragments the database, with sequences running out of their range. The basic need of a unit test with respect to a database is to ensure that the logic for inserting and fetching records works as expected, along with the queries; the content of the records need not vary for the JUnit assertions to work. In such circumstances nothing is more helpful than setting up an in-memory database.
    HSQLDB is an ideal choice for an in-memory embeddable database, especially for Java applications, as it is written in Java and integrates well with Spring, Hibernate and JPA. It supports most of the SQL:2008 standard and provides a fast, lightweight alternative for in-memory usage. The default username is "SA" and the default password is empty. If a new user id and password are provided, Hibernate creates the new user but does not grant the privileges necessary to insert into the created tables; in that case privileges must be granted by executing an HSQLDB statement such as "GRANT ALL ON SCHEMA_NAME.TABLE_NAME TO PUBLIC". Hence it is preferable to use the super admin account for all in-memory database operations.
The JDBC URL for an in-memory database starts with "jdbc:hsqldb:mem:", where "mem" is the in-memory protocol identifier. HSQLDB also has other protocol identifiers, as follows:

  • mem: stored entirely in RAM, without any persistence beyond the JVM process's life
  • file: stored in filesystem files
  • res: stored in a Java resource, such as a jar, and always read-only
  • hsql and hsqls: connect to a local or networked database server. The host and port specify the IP address or host name of the server and an optional port number; the database to connect to is specified by an alias, a lowercase string defined in the server.properties file.
  • http and https: connect to a remote database server across a network domain, similar to the above.
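
A few example connection URLs for these protocols (database names and paths are illustrative):

jdbc:hsqldb:mem:testdb                      -- private in-memory database
jdbc:hsqldb:file:/opt/db/testdb             -- file-based database on the local filesystem
jdbc:hsqldb:res:/org/example/db/testdb      -- read-only database bundled as a classpath resource
jdbc:hsqldb:hsql://localhost:9001/testdb    -- database served by a local HSQLDB server (alias 'testdb')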

In the Hibernate configuration for HSQLDB, the hibernate.hbm2ddl.auto property automatically validates or exports schema DDL to the database when the SessionFactory is created. With the create-drop option, the database schema is created when the SessionFactory is initialized and dropped when the SessionFactory is closed explicitly. Hibernate uses the hbm mappings generated from the database to create the tables for the in-memory database schema. This feature makes it easier to load the database without generating complex SQL scripts. Here are the other possible options for the hibernate.hbm2ddl.auto property:
  • validate: validates the schema, makes no changes to the database.
  • update: updates the schema.
  • create: creates the schema, destroying the previous data.
  • create-drop: creates schema at the start of session and drops the schema at the end of the session.

The hibernate.cache.use_query_cache property enables caching of query result sets. This is only useful for queries that run frequently with the same parameters. The query cache does not cache the state of the actual entities in the result set; it only caches identifier values and results of value types.
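
Note that the query cache still has to be requested per query; a minimal Hibernate 3 sketch (the entity and query are illustrative) looks like this:

// assumes an open Session and a mapped Customer entity; names are illustrative
List results = session
        .createQuery("from Customer c where c.region = :region")
        .setParameter("region", "EMEA")
        .setCacheable(true)   // cache identifiers/value-typed results for this query
        .list();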
     In Hibernate, the Session acts as a transaction-level (first-level) cache of persistent data. The second-level cache, in contrast, is scoped to the SessionFactory: it exists as long as the SessionFactory is alive and holds the data for the properties and associations of individual entities which are marked to be cached. The hibernate.cache.use_second_level_cache property can be used to enable or disable second-level caching, which is enabled by default when a provider is configured. The hibernate.cache.provider_class property tells Hibernate which caching implementation to use, i.e. a class implementing org.hibernate.cache.CacheProvider. Hibernate is bundled with a number of built-in integrations for open-source cache providers (including org.hibernate.cache.EhCacheProvider) which can be used as the Hibernate cache provider. The hibernate.cache.region.factory_class property specifies the implementation used to build the second-level cache regions. Ehcache is a widely used open-source Java cache for general-purpose caching and lightweight containers. Since Hibernate 2.1, Hibernate has included an Ehcache CacheProvider, and it is periodically synced up with the provider in the Ehcache Core distribution. It is required to specify both the Ehcache provider and the region factory in order to avoid exceptions such as "org.hibernate.cache.NoCachingEnabledException: Second-level cache is not enabled". Below is a sample Hibernate Spring configuration for an in-memory HSQLDB database:

 <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">    
  <property name="driverClassName" value="org.hsqldb.jdbc.JDBCDriver"/>     
  <property name="url" value="jdbc:hsqldb:mem:mydb"/>     
  <property name="username" value="sa"/>     
  <property name="password" value=""/> 
 </bean>
 
 <bean id="sessionFactory"
  class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" depends-on="hsqlSchemaCreator">
  <property name="configLocation">
   <value>classpath:measurementapicore-hibernate-mapping/hibernate.cfg.xml</value>
  </property>
  <property name="dataSource" ref="dataSource" />
  <property name="hibernateProperties">
   <props>
    <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
    <prop key="hibernate.show_sql">false</prop>
    <prop key="hibernate.format_sql">false</prop>                                                            
    <prop key="hibernate.lazy">false</prop>
    <prop key="hibernate.pretty">true</prop>
    <prop key="hibernate.cache.use_query_cache">true</prop>                                                  
    <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</prop>
    <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
    <prop key="hibernate.generate_statistics">true</prop>
    <prop key="hibernate.hbm2ddl.auto">create-drop</prop>
    <prop key="hibernate.connection.autocommit">true</prop>
   </props>
  </property>
 </bean>
 
 <bean id="hsqlSchemaCreator" class="com.emprovise.configuration.HSQLSchemaCreator">
        <property name="dataSource" ref="dataSource" />
        <property name="schema" value="CONFIG, SHRDM" />
    </bean>

 <jdbc:embedded-database id="embedded" type="HSQL"/> 
  <jdbc:initialize-database data-source="dataSource">     
  <jdbc:script location="classpath:dbschema/shrdm_data_setup.sql"/> 
  <jdbc:script location="classpath:dbschema/config_data_setup.sql"/> 
 </jdbc:initialize-database> 



Although the above configuration (except the hsqlSchemaCreator bean) is sufficient for a database where all tables reside in a single schema, it brings some challenges when trying to load tables from multiple schemas. Hibernate allows configuring the default schema by setting the hibernate.default_schema property, which is used for the HSQLDB in-memory database. But if we refer to tables from different schemas, we add the "schema" attribute to the hibernate-mapping for criteria queries to work, and the named queries use the schema name followed by the table name to refer to the corresponding table. The Hibernate create-drop feature uses the hibernate.default_schema name as the schema for all created tables if none of the tables have the schema attribute set in the Hibernate mapping files; when the schema attribute is set for all tables in the mapping files, the default_schema property is overridden. While creating the HSQL database in such a case, Hibernate expects all the schemas to already exist and will not create them before creating the tables. As a result it throws errors such as "table does not exist" or "invalid schema name" when trying to create tables in a non-existing schema, and every query fails. A solution to this problem was provided by a fellow blogger: a new class called HSQLSchemaCreator is created as below, along with the corresponding bean, and is referenced in the "depends-on" attribute of the sessionFactory bean so that the Spring container creates it before the sessionFactory bean. The HSQLSchemaCreator takes the schema names as comma-separated values and creates each schema using a plain old Spring JdbcTemplate.

import java.util.StringTokenizer;

import javax.sql.DataSource;

import org.springframework.beans.factory.InitializingBean;
import org.springframework.jdbc.core.JdbcTemplate;

public final class HSQLSchemaCreator implements InitializingBean {

    private String schema;
    private DataSource dataSource;

    // setters and getters
    public String getSchema() {
        return schema;
    }

    public void setSchema(String schema) {
        this.schema = schema;
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void afterPropertiesSet() throws Exception {
        if (schema != null) {
            // create each comma-separated schema before the SessionFactory is initialized
            StringTokenizer stringTokenizer = new StringTokenizer(schema, ",");
            JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

            while (stringTokenizer.hasMoreTokens()) {
                String nextToken = stringTokenizer.nextToken().trim();
                jdbcTemplate.execute("CREATE SCHEMA " + nextToken + " AUTHORIZATION DBA");
            }
        }
    }
}


Monday, November 5, 2012

Acceptance Testing: Cucumber JVM

Cucumber-JVM is the Java version of the popular Cucumber BDD tool from the Ruby platform. The Cucumber community (cukes) is one of the most vibrant communities and has been expanding the framework from Ruby to Java, .NET, Python, Perl, PHP etc. The core features of Cucumber are similar to JBehave: a story (feature) file, a corresponding scenario implementation, and an entry point class to execute all the stories. But there are some key differences between them, discussed as follows:

  1. All the parameters in the story are parsed using regular expressions, instead of the parameter matcher used by JBehave ($ by default).
  2. JBehave allows extending scenario implementation classes (i.e. StorySteps classes) in order to reuse scenarios. Cucumber, on the other hand, directly finds the scenario implementation for the story regardless of its class, and blocks extending the implementation classes.
  3. In JBehave, only a single instance of the step definition class (scenario implementation class) is maintained during the execution of the story, retaining the values of instance variables. In Cucumber, though, a new instance of the step definition class is created for each scenario and all the previous values of instance fields are lost.
  4. JBehave supports annotations such as @BeforeStory and @AfterStory, which allow methods to execute before and after the entire story respectively. Cucumber, on the other hand, has @Before and @After annotations which by default execute before or after every scenario. If a parameter is passed along with the annotation, such as @Before("@SETUP") or @After("@SETUP"), then the method is executed before or after the scenarios tagged "@SETUP" in the feature file.
  5. JBehave is flexible about having the step definition class (scenario method implementation) anywhere in the package structure, but requires the story entry point class (AllStories) to specify an instance of the class. Cucumber-JVM, on the other hand, mandates that the step definition classes be in the same package as the story entry point class (AllStories); this enables Cucumber to automatically find the implementation methods for the scenarios specified in the story.
  6. The tabular input format in JBehave uses an ExamplesTable class, which is a list of maps, each map representing a row, with the table headers as keys to retrieve row values. In contrast, Cucumber-JVM requires creating a class representing the table row structure; Cucumber then returns a List of objects of the table type created earlier. This also helps to distinguish text fields from numeric fields using the data types of the instance variables.
  7. The Configuration class in JBehave provides a rich set of customizations for reports, input parameter converters and story paths. While Cucumber does provide some configuration options, major customization still doesn't seem to be straightforward.

The configuration of Cucumber, as mentioned above, consists of an entry point class which executes all the features on the feature path. It loads the features using the Cucumber class, providing options for execution and report generation. Below is the list of available options:
  1. tags: specifies the tagged scenarios and stories to execute or skip; only scenarios tagged with tags matching TAG_EXPRESSION are run.
  2. strict: usually, when Cucumber can't find a matching step definition the step gets marked as yellow, and all subsequent steps in the scenario are skipped; the strict option causes Cucumber to exit with 1 for pending and undefined steps.
  3. format: specifies how the results are formatted. Available formats: pretty, progress, html, json, json-pretty, junit:
    html: generates an html report in the targeted location
    json: generates a compact json report in the targeted location
    json-pretty: generates a well-formatted json report in the targeted location
    junit: generates a Cucumber JUnit report in the targeted location (xml format)
    progress: prints a compact, one-character-per-step progress output
     
  4. features: specifies the path to the feature file (story). E.g. @Cucumber.Options(features = "classpath:simple_text_munger.feature")
  5. glue: specifies the path where glue code (step definitions and hooks) is loaded from.
  6. name: runs only the scenarios whose names match REGEXP.
  7. dry-run: skips execution of glue code.
  8. monochrome: doesn't color terminal output.

Below is the code which loads the Cucumber feature files using the Cucumber class with the options described above.

@RunWith(Cucumber.class)
@Cucumber.Options(tags = { "~@WIP", "~@BROKEN" }, strict = true,
    format = { "pretty", "html:target/cucumber", "json-pretty:target/cucumber.json" })
public class AllStories { }


Features in Cucumber-JVM are similar to the JBehave stories, with Given-When-Then scenarios and support for tabular input as well. Further, tags can be referenced in the feature file in order to tag scenarios and stories and to execute or skip them.


@TESTS
Feature: Add a customer to the records.

@SETUP
Scenario: Customer account "John" is created with default settings.
Given a customer with the name "John" and table
        | ROW_ID  | NAME   | VALUE |
        | 3232323 | John12 | abc   |
        | 6454560 | John42 | xyz   |

When a customer tries to create an account
Then get an customer account id which is not null and greater than zero

Similar to JBehave, Cucumber also provides step definitions for the execution of the scenarios in the features. As mentioned above, the step definition class cannot be extended for reuse, but Cucumber automatically scans the package of its entry-point class to find the step definitions for the corresponding scenarios. Further, @StepDefAnnotation is used to mark the class of step definitions, which is later scanned by Cucumber-JVM.

@StepDefAnnotation
public class OrgTerminalMachineSetupSteps {

    @Before("@SETUP")
    public void cleanup() { ... }

    @Given("^a customer with the name \"([^\"]*)\" and table$")
    public void a_customer_with_the_name(String customerName, List<Row> list) throws Throwable { .. }

    @When("^a customer tries to create an account$")
    public void dealer_tries_to_create_an_account() throws Throwable { .. }

    @Then("^get an customer account id which is not null and greater than zero$")
    public void get_customer_accid_not_null_and_greater_thanzero() throws Throwable { .. }

    class Row {
        public String rOW_ID;
        public String nAME;
        public String vALUE;
    }
}


For each step of a scenario in a feature, Cucumber scans for and finds the step definitions. It then creates an instance of the step definition class before executing each scenario and executes the corresponding step methods. So if the scenarios need to be inter-dependent in order to carry out an operation, all the instance fields of the step definition class need to be singletons: either a singleton factory class can be used to get the field instances, or Spring can be used to inject them.
     Cucumber supports Spring integration and requires the "cucumber-spring" jar and a "cucumber.xml" file in the source main resources directory. The cucumber.xml specifies the beans or component scans needed to load the beans required for the Cucumber acceptance tests; other Spring config files can be imported into cucumber.xml for a more organized configuration. The cucumber.xml is loaded by default by cucumber-spring before it initializes the step definition classes for test execution. All the fields shared across scenarios should be @Autowired to grab the instances loaded via cucumber.xml. With such Spring integration, it becomes possible to maintain field instances across scenarios, access properties, and take advantage of most Spring-related features.
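
A minimal cucumber.xml could therefore simply import the application's existing Spring configuration and enable component scanning; the imported file and base package below are illustrative:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- reuse the application's own Spring configuration -->
    <import resource="classpath:applicationContext.xml"/>

    <!-- pick up any test helper beans shared across scenarios -->
    <context:component-scan base-package="com.example.acceptance"/>

</beans>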
     Moving ahead with the Jenkins setup for running the Cucumber tests, it is necessary to run the tests in Maven using the maven-failsafe-plugin. The problem with the failsafe plugin, though, is that it expects all the tests to be inside the source test directory instead of the source main directory. Although this seems logical, since we are running tests and not development code, it requires loading all the Spring-related beans from a cucumber.xml in test resources by importing the Spring config files from the main resources directory. This did not work with Spring configs in both the test and main directories, and none of the beans were loaded; copying all the Spring configuration files from the main resources folder to the test resources only allows loading/component-scanning the beans from classes present in the same codebase (test or main). Hence the only solution we found was to copy all the sources from main to test, which seemed like a lot of change. To avoid such a major change just to run the tests through the Maven plugin, the plugin configuration was modified to load all the tests from the source main directory. Below are the changes and the configuration of the maven-failsafe-plugin:

 <plugin>

   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-failsafe-plugin</artifactId>
   <version>2.12</version>

   <configuration>
     <includes>
       <include>**/AllStories.java</include>
     </includes>
     <testSourceDirectory>${project.build.sourceDirectory}</testSourceDirectory>
     <testClassesDirectory>${project.build.outputDirectory}</testClassesDirectory>
     <reportsDirectory>${project.build.outputDirectory}/failsafe-reports</reportsDirectory>
     <additionalClasspathElements>
       <additionalClasspathElement>${project.build.sourceDirectory}/resources</additionalClasspathElement>
     </additionalClasspathElements>
   </configuration>

   <executions>
     <execution>
       <id>integration-test</id>
       <goals>
         <goal>integration-test</goal>
         <goal>verify</goal>
       </goals>
     </execution>
   </executions>

 </plugin>


The above changes to testSourceDirectory and testClassesDirectory cause the plugin to load the tests from the source main directory, thus running the acceptance tests. Moving on to the Jenkins configuration, Cucumber provides a nice plugin for Jenkins which produces well-organized reports. The configuration of the Cucumber-Reports (latest version 0.0.14) Jenkins plugin is very simple, as described in its documentation. The JSON report is generated in the target folder by default, hence we specify "Json Reports Path" as target. The "Plugin Url Path" is used to make the "Back To Jenkins" link in the Cucumber reports point to the right Jenkins URL.



One important note while running the Cucumber-Reports plugin: if the feature file contains only a Scenario without any Given-When-Then statements, the Cucumber tests do run and generate the JSON report, but the generated report cannot be parsed by cucumber-reports and it throws the exception below:

[CucumberReportPublisher] Compiling Cucumber Html Reports ...
[CucumberReportPublisher] copying json from: file:/c:/.jenkins/workspace/cucumber-acceptance-tests/to reports directory: file:/e:/.jenkins/jobs/cucumber-acceptance-tests/builds/2012-11-01_16-13-02/cucumber-html-reports/
[CucumberReportPublisher] Generating HTML reports
ERROR: Publisher net.masterthought.jenkins.CucumberReportPublisher aborted due to exception
java.lang.NullPointerException
at net.masterthought.cucumber.util.Util.collectSteps(Util.java:104)

The reason is that the cucumber-reports plugin expects every scenario to contain at least a Given statement in order to parse the generated JSON report successfully. Hence if we specify the scenario with at least a Given step as below, the cucumber-reports Jenkins plugin generates the report successfully.


@SETUP
Scenario: Setup.
Given Something

There are still some unresolved issues with the Cucumber-Reports Jenkins plugin. In the Feature Statistics table of the Cucumber report, the duration shown is "35 secs and 55 ms" when it is actually supposed to be around 30 minutes. Also, in the feature report details we see the message "Result was missing for this step". This message is displayed because the JSON report generated by Cucumber doesn't have the result section for every step: "result": { "duration": 776000, "status": "passed" }. With cucumber-jvm version 1.0.14 the JSON report does not have the result section, but with 1.0.8 or 1.0.9 it does. The cucumber-reports plugin, in both versions 0.0.14 and 0.0.12, cannot parse the result section of the generated JSON report, and the issue still persists. A quick fix would be to try the cucumber-reports Jenkins plugin version 0.0.9 as shown in the web documentation, or wait until the issue is resolved in later versions.

Sunday, November 4, 2012

Acceptance Testing: JBehave



Acceptance testing is one of the crucial phases of product testing as it determines whether the system operates based on the specifications set for it. It ensures that the system functions as expected while integrating with numerous components and services to provide accurate results. Such automated testing of the product as a whole, based on a pre-decided set of scenarios (mainly from testers), ensures that faults are caught before manual testing takes over. It not only saves time for both testers and developers but also boosts developer confidence while making crucial changes in legacy code. Various approaches can be followed to write acceptance tests: either the data needed for the test is created from scratch in a regular or in-memory database beforehand (and deleted once the test is completed, in the case of a regular database), or a static database for acceptance tests is used/maintained where the data needed for the tests is essentially always present.

  There are 5 core principles for writing acceptance tests, as mentioned below:
  1. Acceptance tests should be isolated and external to the application under test.
  2. Acceptance tests should be executed against the live application.
  3. Acceptance tests should be independent of any development environments.
  4. Acceptance tests should always be executed against actual data.
  5. Acceptance tests should imitate the manual verification criteria.
  With all said about the advantages of acceptance testing, there are two major Java frameworks supporting it, namely JBehave and Cucumber. Both share the basic idea of writing stories which contain various test scenarios, using Given-When-Then clauses. All the scenarios are executed by mapping the Given-When-Then clauses to corresponding methods and executing the mapped methods in the order given in the story. Upon completion of the execution, a report is generated based on the story, providing the execution results. But JBehave and Cucumber differ in some aspects of their workings: whereas JBehave requires the story to be tightly coupled with its Java implementation class, Cucumber only requires such coupling based on the scenarios in the story, irrespective of the implementation class. Let's dive in to have a closer look at each of the frameworks.

JBehave
JBehave has been around for quite a while as an acceptance testing framework and has most of the basic set of features, such as reusing scenarios, skipping scenarios, html/json/xml/text reporting, running multiple stories, a Jenkins plugin etc.

In the Maven world, JBehave can be configured by adding a dependency in the pom.xml on "jbehave-core" (version 3.6.8) in the group "org.jbehave". Also, in order to execute all the stories using Maven (mvn integration-test), a plugin entry must be added in the plugins section as follows:
      <plugin>
        <groupId>org.jbehave</groupId>
        <artifactId>jbehave-maven-plugin</artifactId>
        <version>${jbehave.core.version}</version>
        <executions>
          <execution>
            <id>unpack-view-resources</id>
            <phase>process-resources</phase>
            <goals>
              <goal>unpack-view-resources</goal>
            </goals>
          </execution>
          <execution>
            <id>embeddable-stories</id>
            <phase>integration-test</phase>
            <configuration>
              <includes>
                <include>${embeddables}</include>                <!-- include all stories -->
              </includes>
              <excludes />
              <storyTimeoutInSecs>5200</storyTimeoutInSecs>
              <generateViewAfterStories>true</generateViewAfterStories>
              <ignoreFailureInStories>false</ignoreFailureInStories>
              <ignoreFailureInView>false</ignoreFailureInView>
              <threads>1</threads>
              <metaFilters>
                <metaFilter>-skip</metaFilter>     <!-- meta filter used to skip tagged scenarios -->
              </metaFilters>
            </configuration>
            <goals>
              <goal>run-stories-as-embeddables</goal>
            </goals>
          </execution>
        </executions>
      </plugin>


Once Maven is configured and ready, we can write story scenarios and their implementations. Now, in order to invoke all the stories from Eclipse, a Java class inheriting JUnitStories is implemented, which specifies a configuration similar to the Maven plugin above.

public class AllStories extends JUnitStories {

    // cross-reference output included in the generated reports
    private final CrossReference xref = new CrossReference();

    public AllStories() {
        configuredEmbedder().embedderControls()
            .doGenerateViewAfterStories(true)
            .doIgnoreFailureInStories(false)   // stop the remaining scenarios if any scenario fails
            .doIgnoreFailureInView(false)
            .useThreads(1)                     // number of threads to use
            .useStoryTimeoutInSecs(300);       // story execution timeout in seconds
        // specify the meta filter used to skip tagged scenarios
        configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
    }

    public Configuration configuration() {
        Class<? extends Embeddable> embeddableClass = this.getClass();
        // enables decorating and formatting non-HTML reports
        Properties viewResources = new Properties();
        viewResources.put("decorateNonHtml", "true");
        // start from the default ParameterConverters instance
        ParameterConverters parameterConverters = new ParameterConverters();
        // factory to allow parameter conversion and loading from external resources (used by the StoryParser too)
        parameterConverters.addConverters(new DateConverter(new SimpleDateFormat("yyyy-MM-dd")));

        return new MostUsefulConfiguration()
            .useStoryControls(new StoryControls().doDryRun(false).doSkipScenariosAfterFailure(true))
            .useStoryLoader(new LoadFromClasspath(embeddableClass))
            .useStoryPathResolver(new UnderscoredCamelCaseResolver())
            .useStoryReporterBuilder(new StoryReporterBuilder()
                .withCodeLocation(CodeLocations.codeLocationFromClass(embeddableClass))
                .withDefaultFormats()
                .withPathResolver(new ResolveToPackagedName())
                .withViewResources(viewResources)
                // generates reports in the following formats
                .withFormats(CONSOLE, TXT, HTML, XML)
                .withCrossReference(xref)
                // displays the full exception stacktrace in the generated report
                .withFailureTrace(true).withFailureTraceCompression(true))
            .useParameterConverters(parameterConverters);
    }

    // specify the class which implements the methods mapped to the scenarios
    public InjectableStepsFactory stepsFactory() {
        return new InstanceStepsFactory(configuration(), new Object[] { new StorySteps() });
    }

    // specify the relative path to the stories, with the stories to include and exclude
    protected List<String> storyPaths() {
        String codeLocation = codeLocationFromClass(this.getClass()).getFile();
        return new StoryFinder().findPaths(codeLocation, Arrays.asList("**/**/*.story"), Arrays.asList("**/excluded*.story"));
    }
}

The configuration above enables report generation by setting doGenerateViewAfterStories to true. It sets JBehave to stop the execution of the remaining scenarios in case any scenario fails. The execution is configured to run on a single thread in order to show an accurate execution duration in the report; it also prints the scenario statements one by one with the debug results, providing a clear picture. In order to prevent a timeout of the story due to long service calls, the timeout is set to a comfortable 5 minutes. Also, a "skip" meta filter is added in order to skip tagged scenarios in the story.
    In the configuration method we specify the properties used to format non-HTML reports. The reports are generated in Text, HTML and XML formats and in the Eclipse/command console. The stack trace is included in the report on failure of a scenario using the withFailureTrace() method.
    After the configuration and story loader class is ready, we write the actual story with scenarios as follows:

Story: A customer with name "John" needs to setup a account.

Scenario: Customer account is created with default settings.
Given a customer with the name "John" with
| a | b | c |
| 1 | 0 | 1 |
| 2 | 6 | 4 |
When customer tries to create an account
Then get an customer account id which is not null and greater than 0
........
Scenario: ......
Meta: @skip
Given ......

Note that in the last scenario above we use the Meta information, providing a property with the name "skip" and no value. Meta matchers can also be used as name-value pairs, such as "@ignore true". The order of the scenarios in the story is the order of their execution.
   Moving forward, we write the corresponding implementation for the scenarios in a class which is referenced in the stepsFactory() method of the AllStories class. Annotations such as @Given, @When and @Then are used to bind methods to the corresponding scenario's Given, When and Then clauses. The @BeforeStory and @AfterStory annotations are used to initialize the story and clean up after its execution respectively. It is important to know that the statements following Given-When-Then in the story must match the ones in the annotations in order for a method to bind to the corresponding statement. Further, we use quotes to highlight the parameters and "$" to identify the parameters for parsing. The identifying character for parameters can be changed, for example to "%", using the following statement in the configuration of the AllStories class.
return new MostUsefulConfiguration().useStepPatternParser(new RegexPrefixCapturingPatternParser("%")) 

Further, the parameters parsed using "$" from the variables in the story are assigned to the method parameters of type String, Integer or ExamplesTable. Any text in the appropriate position is converted to a String, while a number is converted to an Integer. A table is converted to an ExamplesTable, one of the JBehave types, which is essentially a list of maps consisting of the values from the table keyed by the table headers.

public class StorySteps {

    ...........

    @BeforeStory
    public void initialize() { .... }

    ................

    @Given("a customer with the name \"$customerName\" with $someTable")
    public void setupOrg(String customerName, ExamplesTable someTable) throws Exception {
        ....
        List<Map<String, String>> rows = someTable.getRows();
        for (Map<String, String> row : rows) {
            String a = row.get("a");
            ......
        }
    }

    @When("customer tries to create an account")
    public void whenCustomerTriesToCreateAnAccount() throws Exception { .... }

    @Then("get an customer account id which is not null and greater than $number")
    public void thenGetAnCustomerAccountIdWhichIsNotNullAndGreaterThan(Integer number) {
        Assert.assertThat...
    }

    @AfterStory
    public void cleanUp() throws Exception { ... }

    .............

}


The reports generated by JBehave are impressive, providing a list of all stories along with their execution times, total scenarios, successes, failures etc. Each story then provides the details of its scenarios as Given-When-Then, colored green for success and red (with the stack trace) for failure. The only odd thing in the report is the Given section: when it contains a table in the story, the table gets converted to a chunk of text without indentation and spacing, and even using the "{trim=false}" property before the table does not preserve the spacing of the columns in the report.

JBehave also provides a plugin for Jenkins, the continuous integration system, which parses the report generated in XML format to provide test statistics similar to JUnit. The configuration is simple: in the "Post-build Actions" add "Publish testing tools result report", then add "JBehave-3.x". Usually the pattern "**/jbehave/*" works, but a more specific pattern such as "**/jbehave/stories.*.xml" certainly does. The Jenkins report for JBehave is nothing fancy: a list of all the scenarios executed or failed, and the current testing trend.

We will continue next with a discussion of the Cucumber-JVM framework.