Sunday, September 8, 2013

CXF WS Security

  1. Set up WS Security in Weblogic
  2. Test it using the SOAP-UI client
  3. Create a CXF client to send a request with BST
  4. Receive the response from the CXF client with Security Confirmation

Setting up WS Security in Weblogic


Oracle Weblogic Server 12c was used to host the web service application. The application is an EJB application with a stateless session bean exposed as a web service. It uses the weblogic.jws.Policies and weblogic.jws.Policy annotations to specify the location of the policy file.

@Stateless
@WebService(targetNamespace="http://ws-connector.sp.ttp.tsm.com", name="TTPSPService",
  portName="TTPSPPort", endpointInterface="com.tsm.ttp.sp.ws_connector.TTPSP")
@Policies( { @Policy(uri = "policy:TTPSP-Policy.xml") } )
public class TTPSP implements com.tsm.ttp.sp.ws_connector.TTPSP {

    public TTPSP() { }

    @Override
    public CheckEligibilityResponseType checkEligibility(CheckEligibilityRequestType input) {
        CheckEligibilityResponseType result = new CheckEligibilityResponseType();
        return result;
    }
    ...........
}

The policy file is located in the "Project/ejbModule/META-INF/policies" folder. The policy.xml specifies that the recipient requires a "WssX509V3Token11" token, which corresponds to the Binary Security Token. The preferred algorithm suite is "Basic256". A Timestamp must be included, and both the Headers and the Body must be signed entirely. The policy also requires signature confirmation via the "<sp:RequireSignatureConfirmation/>" element.

To build the project we first need to generate "wlfullclient.jar" for the current Weblogic server. The JarBuilder tool creates wlfullclient.jar using the following command:
WL_HOME/server/lib> java -jar wljarbuilder.jar

In some cases the weblogic.jws.Policies class or other packages may be absent from the freshly created wlfullclient.jar. In the case of Weblogic 10.3.3, the weblogic.jws.Policies and weblogic.jws.Policy classes are not present in the "wseeclient.jar" and "wls-api.jar" files either. These classes can be found in
"C:\oracle\Middleware\modules\ws.api_1.0.0.0.jar" for Weblogic 10.3.3 and in
"C:\Oracle\Middleware\modules\ws.api_2.0.0.0.jar" for Weblogic 12.1.

In Eclipse, we create a "New EJB Project" and all the source packages are added to the "ejbModule" folder. The "ejbModule" folder also contains the "META-INF" folder with the "MANIFEST.MF" and "policy.xml" files. Create a "lib" folder in the project and add "wlfullclient-11.1.jar", "ws.api_1.1.0.0.jar" and the other required jars.

Now create a "New Enterprise Application Project", i.e. an EAR project, naming it the same as the previous project with "EAR" appended at the end. During creation, configure the EAR settings to add the J2EE module dependencies. The EAR project can be created by right-clicking "Deployment Descriptor: Projectname" -> New -> Project -> EAR Project.

In order to configure the weblogic server with WS Security, we need to generate a keystore using java keytool as follows:

1)  Generate a new JKS Keystore with new Keypair:


keytool -genkeypair -alias bank: BANK -keyalg RSA -keysize 1024
        -validity 365 -keystore bank.jks
KeyStore Password: t1bank
                Enter key password for <bank: BANK>: t1bank

2)  Export a certificate from the generated keystore:


keytool -exportcert -alias bank: BANK -file bank.cer -keystore bank.jks
Enter keystore password:
Certificate stored in file <bank.cer>

Now to configure the keystore in Weblogic we have two choices: one is to add the keys from the bank.jks keystore to the DemoTrust.jks keystore, and the other is to change the keystore configuration to use a custom keystore.
   Initially, we tried to set up a custom keystore using the description from this link. The process was as follows:


  1. In Weblogic server administration, expand Servers and select the server you need to update.
  2. Select Configuration -> Keystores -> SSL.
  3. Click the Change link under Keystore Configuration.
  4. Select Custom Identity and Java Standard Trust as the keystore configuration type and continue.
  5. For the Custom Identity Keystore File Name, enter the path to your Java keystore. Select Keystore type as jks .
  6. Enter your Custom Identity Keystore Passphrase as the password you used when you created the Java keystore
  7. Confirm the password, click Continue and then Finish. 
  8. Go back under Servers and select the server that you are working with.
  9. Select Configuration -> Keystores -> SSL.
  10. Under Configure SSL, select Keystores as the method for storing identities.
  11. Enter the server certificate key alias (in this example, myalias was used), and the keystore password
  12. Click Finish to finalize the changes. You will need to reboot Weblogic for those changes to take effect.
After going with the above approach, changing the "Keystore Configuration" to "Custom Identity and Java Standard Trust" and pointing all the JKS keystores to bank.jks, the Weblogic console gave the following error:
"weblogic.management.DeploymentException: Deployment could not be created. Deployment creator is null."

The reason for the above error turned out to be that the SSL configuration had not been updated to match the keystore configuration. Hence the "SSL Configuration" was changed to use a "Custom Trust Store", and the key alias and password to be used were specified. It still resulted in a failure, as no request was able to reach the Weblogic server.

After learning from the above failures, we switched to the first option, i.e. adding the certificate to DemoTrust.jks. The demo keystores are the keystores configured in the Weblogic console by default, under (Environment -> Servers -> AdminServer -> Configuration -> Keystores). The names of the demo keystores and their default passwords are as follows:

Keystore: DemoTrust.jks
Password: DemoTrustKeyStorePassPhrase
Path:     C:\Oracle\Middleware\wlserver_10.3\server\lib

Keystore: DemoIdentity.jks
Password: DemoIdentityKeyStorePassPhrase
Path:     C:\Oracle\Middleware\wlserver_10.3\server\lib

Keystore: cacerts
Password: changeit
Path:     C:\Oracle\Middleware\jdk160_21\jre\lib\security

 All the demo keystores for the Weblogic server are located in "Oracle\Middleware\wlserver_10.3\server\lib", where the "DemoTrust.jks" and "DemoIdentity.jks" files can be found. Here we add bank.cer ONLY TO the "DemoTrust.jks" keystore and NOT TO "DemoIdentity.jks". We also DON'T ADD bank.cer to "cacerts" located in the "Oracle\Middleware\jdk160_21\jre\lib\security" folder. The process is as follows:

1)     Add the bank.cer ONLY TO DemoTrust.jks keystore using the following command:
      keytool -importcert -alias bank: BANK -file bank.cer -keystore DemoTrust.jks
      Enter keystore password: DemoTrustKeyStorePassPhrase

        Trust this certificate? [no]: yes
        Certificate was added to keystore

2)    We confirm if the keys are added into the DemoTrust.jks by the following command:
        keytool -list -keystore DemoTrust.jks 

All the server logs can be found in the following log file:
“Oracle\Middleware\user_projects\domains\base_domain\servers\AdminServer\logs\base_domain.log”.

WS Security with BST Client using SOAP-UI


Open SOAP-UI and create a new project based on the WSDL or endpoint provided. In order to set up WS Security for the SOAP-UI client, right-click on the newly created project and select "Show Project View" from the menu.


Select the "WS-Security Configurations" tab and then the "Keystores/Certificates" tab in the inner window. Then click the "+" button to add the new keystore and enter the keystore password.




Then select the "Outgoing WS-Security Configurations" tab in the inner window. Click the add button in the top section to add a new configuration to the outgoing WS-Security configurations. Fill in the default username/alias and password to be used in all the WSS actions. Now, in the bottom section, click the "+" button to add a Timestamp entry. Set the "Time to Live" to 1800000 and check the option to set millisecond precision for the timestamp.
      Moving forward, add the second WSS entry, "Signature", which will create the Binary Security Token. Select the keystore that was entered in the "Keystores/Certificates" section and enter the alias name with the corresponding password. Select the Key Identifier Type as "Binary Security Token" in order to create the Binary Security Token first. Select the signature algorithm, canonicalization algorithm and digest algorithm. Finally, check "Use single certificate for signing" in order to use only the base certificate and not all the certificates in the chain. The "Parts" section is left empty, but by default SOAP-UI will sign the "Body" element using the generated BinarySecurityToken.





Moving forward, add another Signature WSS entry, the third one overall. As in the previous Signature configuration, select the keystore, enter the alias and password, and select the same algorithm entries as before. Most importantly, for the Key Identifier Type select "Issuer Name and Serial Number" in order to sign all the elements. The "Use single certificate for signing" option remains unchecked, as all the certificates in the chain should be used for signing. Unlike before, use the "+" button near the "Parts" section to add the "Timestamp", "Body" and "BinarySecurityToken" elements with the namespace and encoding information (by default "Content").


Now the project is WS-Security enabled for requests. But before firing the individual requests, select the request method and, under project properties, make sure the "Strip whitespaces" property is set to "true".


Then click the "Authentication and Security-related settings" for the request at the bottom, which opens a window. Select the Outgoing WSS with the same name given in the "Outgoing Security Configurations" section before.



This can also be done by right-clicking the request and using the menu to set the "Outgoing WSS" to the corresponding outgoing WS-Security configuration. The former method is usually preferred.


The Resulting SOAP-UI Request is as follows:
<soapenv:Envelope xmlns:soapenv="http://..." xmlns:ws="http://...">
   <soapenv:Header>
     <wsse:Security xmlns:wsse="http://...">
       <ds:Signature Id="Signature-11" xmlns:ds="http://...">
         <ds:SignedInfo>
           <ds:CanonicalizationMethod Algorithm="http://.../xml-exc-c14n#"/>
           <ds:SignatureMethod Algorithm="http://.../xmldsig#rsa-sha1"/>
           <ds:Reference URI="#Timestamp-8">
             <ds:Transforms>
               <ds:Transform Algorithm="http://.../xml-exc-c14n#"/>
             </ds:Transforms>
             <ds:DigestMethod Algorithm="http://.../xmldsig#sha1"/>
             <ds:DigestValue>0up9O5yZ6wLnau/eTzPZtfz+IIM=</ds:DigestValue>
           </ds:Reference>
           <ds:Reference URI="#id-10">
             <ds:Transforms>
               <ds:Transform Algorithm="http://.../xml-exc-c14n#"/>
             </ds:Transforms>
             <ds:DigestMethod Algorithm="http://.../xmldsig#sha1"/>
             <ds:DigestValue>EAuvZTemCXTia8fPYXngIZOCPE0=</ds:DigestValue>
           </ds:Reference>
           <ds:Reference URI="#CertId-2B6B2C4066C46E9954132989807937513">
             <ds:Transforms>
               <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
             </ds:Transforms>
             <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
             <ds:DigestValue>kYMlR5YhU9CHpVaL0uCVnxINNF0=</ds:DigestValue>
           </ds:Reference>
         </ds:SignedInfo>
         <ds:SignatureValue>FSSax.....CWtoxx0=</ds:SignatureValue>
         <ds:KeyInfo Id="KeyId-2B6B2C4066C46E9954132989807940317">
           <wsse:SecurityTokenReference wsu:Id="STRId-0318" xmlns:wsu="http://...">
             <ds:X509Data>
               <ds:X509IssuerSerial>
                 <ds:X509IssuerName>CN=BANK,OU=BANK,O=BANK,L=SG,ST=SG,C=SG</ds:X509IssuerName>
                 <ds:X509SerialNumber>1329894156</ds:X509SerialNumber>
               </ds:X509IssuerSerial>
             </ds:X509Data>
           </wsse:SecurityTokenReference>
         </ds:KeyInfo>
       </ds:Signature>
       <wsse:BinarySecurityToken EncodingType="http://...#Base64Binary" ValueType="http://...#X509v3" wsu:Id="CertId-7513" xmlns:wsu="http://...">
           l4TLCUURhrJbRjXEIEGirTpg==
       </wsse:BinarySecurityToken>
       <ds:Signature Id="Signature-9" xmlns:ds="http://.../xmldsig#">
         <ds:SignedInfo>
           <ds:CanonicalizationMethod Algorithm="http://.../xml-exc-c14n#"/>
           <ds:SignatureMethod Algorithm="http://.../xmldsig#rsa-sha1"/>
           <ds:Reference URI="#id-10">
             <ds:Transforms>
               <ds:Transform Algorithm="http://.../xml-exc-c14n#"/>
             </ds:Transforms>
             <ds:DigestMethod Algorithm="http://.../xmldsig#sha1"/>
             <ds:DigestValue>EAuvZTemCXTia8fPYXngIZOCPE0=</ds:DigestValue>
           </ds:Reference>
         </ds:SignedInfo>
         <ds:SignatureValue>TmjGBLZJ69kHZNG8=</ds:SignatureValue>
         <ds:KeyInfo Id="KeyId-2B6B2C4066C46E9954132989807937514">
           <wsse:SecurityTokenReference wsu:Id="STRId-515" xmlns:wsu="http://...">
             <wsse:Reference URI="#CertId-7513" ValueType="http://...#X509v3"/>
           </wsse:SecurityTokenReference>
         </ds:KeyInfo>
       </ds:Signature>
       <wsu:Timestamp wsu:Id="Timestamp-8" xmlns:wsu="http://...">
         <wsu:Created>2012-02-22T08:07:59.352Z</wsu:Created>
         <wsu:Expires>2012-03-14T04:07:59.352Z</wsu:Expires>
       </wsu:Timestamp>
     </wsse:Security>
   </soapenv:Header>
   <soapenv:Body wsu:Id="id-10" xmlns:wsu="http://...">
     <ws:CheckEligibilityRequest>
       <msisdn>1222</msisdn>
       <mnoid>232</mnoid>
       <servicename>CheckEligibility</servicename>
     </ws:CheckEligibilityRequest>
   </soapenv:Body>
</soapenv:Envelope>

From the above request generated by SOAP-UI for the WS-Security enabled server, we can point out some key things. First, the Security header inside the SOAP Header contains the following elements:
  1. Signature 1
  2. BinarySecurityToken
  3. Signature 2
  4. Timestamp
Both Signature elements carry a <ds:KeyInfo> element, but they reference the signing key differently.
       In Signature 1 we find the <ds:X509Data> element inside the <wsse:SecurityTokenReference> element. The <ds:X509Data> element contains the <ds:X509IssuerSerial> element, which indicates that this signature was produced with the IssuerSerial key identifier. Also, in the Signature 1 element we find three <ds:Reference> elements, one per signed part (in the order of the signature parts specified in SOAP-UI), as follows:
  1. TIMESTAMP                       : <ds:Reference URI="#Timestamp-8">
  2. BODY                                   : <ds:Reference URI="#id-10">
  3. BINARYSECURITYTOKEN:  <ds:Reference URI="#CertId-2B6B2C4066C46E9954132989807937513">
    In Signature 2, on the other hand, we see just the <wsse:Reference> element inside the <wsse:SecurityTokenReference> element. The <wsse:Reference> element has the ValueType "X509v3", which indicates that this signature references the BinarySecurityToken. Even though we didn't specify any values in the "Parts" section of the first Signature entry (the one using BinarySecurityToken as the key identifier), we still see one <ds:Reference URI="#id-10"> element. Comparing its URI with the references in the Signature 1 element, we conclude it is the reference for the Body element. Hence, even if the signature parts list is empty, the Body element is signed by default using the specified key identifier.


Create CXF Client to Send Request with BST


    A senior developer, Xei Songwen, provided a WS-Security implementation that signed just the Body element when sending the request. The classes comprised a Dispatcher, a Client, a customized WSS4JOutInterceptor implementation, a PasswordCallback, SigningCheck.properties and the Spring configuration, as described in the class diagram.
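
For context, a minimal sketch of how such a JAX-WS Dispatch client might be wired up is shown below. The service and port QNames are taken from the @WebService annotation earlier, while the endpoint URL, class name and payload handling are assumptions for illustration; the original Dispatcher and Client classes are not reproduced here. The WS-Security header itself is added by the WSS4JOutInterceptor configured in Spring, not by this code.

import java.io.StringReader;

import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class TTPSPDispatchClient {

    public static void main(String[] args) {
        // service/port QNames from the @WebService annotation; the endpoint URL is a placeholder
        QName serviceName = new QName("http://ws-connector.sp.ttp.tsm.com", "TTPSPService");
        QName portName = new QName("http://ws-connector.sp.ttp.tsm.com", "TTPSPPort");
        String endpoint = "http://localhost:7001/TTPSP/TTPSPService";

        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, endpoint);

        // PAYLOAD mode: we only supply the SOAP Body; the WS-Security header is
        // added by the WSS4JOutInterceptor wired in through the Spring configuration
        Dispatch<Source> dispatch = service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        String payload =
              "<ws:CheckEligibilityRequest xmlns:ws=\"http://ws-connector.sp.ttp.tsm.com\">"
            + "<msisdn>1222</msisdn><mnoid>232</mnoid>"
            + "<servicename>CheckEligibility</servicename>"
            + "</ws:CheckEligibilityRequest>";

        Source response = dispatch.invoke(new StreamSource(new StringReader(payload)));
        System.out.println("Response received: " + response);   // process the Source as needed
    }
}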


The following jar issues were faced and resolved while initially testing the application:

ERROR:
Caused by: java.lang.NoClassDefFoundError: org.apache.axiom.soap.impl.dom.soap11.SOAP11Factory
       at org.apache.axis2.saaj.SOAPPartImpl.<init>(SOAPPartImpl.java:209)
       at org.apache.axis2.saaj.SOAPPartImpl.<init>(SOAPPartImpl.java:246)

ADDED:  saaj-impl-1.3.2.jar
REMOVED: axis2-saaj-1.4.jar

ERROR:
Caused by: java.lang.NoClassDefFoundError: com.sun.org.apache.xerces.internal.dom.DocumentImpl
       at java.lang.ClassLoader.defineClassImpl(Native Method)
       at java.lang.ClassLoader.defineClass(ClassLoader.java:223)

ADDED:  xercesImpl-sun-version.jar

ERROR:
Caused by: java.lang.IncompatibleClassChangeError
       at org.apache.xalan.transformer.TransformerIdentityImpl.createResultContentHandler(TransformerIdentityImpl.java:207)
       at org.apache.xalan.transformer.TransformerIdentityImpl.transform(TransformerIdentityImpl.java:330)
       at com.sun.xml.messaging.saaj.util.transform.EfficientStreamingTransformer.transform(EfficientStreamingTransformer.java:423)
       at com.sun.xml.messaging.saaj.soap.EnvelopeFactory.createEnvelope(EnvelopeFactory.java:136)
       at com.sun.xml.messaging.saaj.soap.ver1_1.SOAPPart1_1Impl.createEnvelopeFromSource(SOAPPart1_1Impl.java:102)
       at com.sun.xml.messaging.saaj.soap.SOAPPartImpl.getEnvelope(SOAPPartImpl.java:156)
       at com.sun.xml.messaging.saaj.soap.MessageImpl.getSOAPBody(MessageImpl.java:1287)
       at

ADDED:  saaj-api-1.3.2.jar

The spring configuration for the WSS4JOutInterceptor is as follows:

      <bean class="com.bank.ebusiness….wssecurity.MSMWSS4JOutInterceptor" id="wss4jOutConfiguration">
              <property name="encrptionKeyStoreName" value="keyStoreName"/>
              <property name="properties">
                <map>
                  <entry key="action" value="Timestamp Signature"/>
                  <entry key="timeToLive" value="1800000"/>
                  <entry key="user" value="keyalias"/>
                  <entry key="signaturePropFile" value="SignatureSigning.properties"/>
                  <entry>
                    <key>
                      <value>passwordCallbackRef</value>
                    </key>
                    <ref bean="passwordCallback"/>
                  </entry>
                  <entry key="useSingleCertificate" value="false"/>
                  <entry key="signatureKeyIdentifier" value="IssuerSerial"/>  <!-- "DirectReference" -->
                  <entry key="signatureAlgorithm" value="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                  <entry key="signatureDigestAlgorithm" value="http://www.w3.org/2000/09/xmldsig#sha1"/>
                  <entry key="signatureCanonicalizationAlgorithm" value="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  <entry key="signatureParts" value="{Element}{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd}Timestamp;"/>
                </map>
              </property>
      </bean>
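
The "passwordCallbackRef" entry above points to a callback handler bean that hands the private-key password to WSS4J at signing time, and the "signaturePropFile" entry points to a standard WSS4J crypto properties file describing the keystore (type, password, alias and location). A minimal sketch of such a callback handler, with the password value assumed from the keystore created earlier, could look like this:

import java.io.IOException;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.ws.security.WSPasswordCallback;

public class PasswordCallback implements CallbackHandler {

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof WSPasswordCallback) {
                WSPasswordCallback pc = (WSPasswordCallback) callback;
                // private key password for the alias configured in the "user" entry (assumed value)
                pc.setPassword("t1bank");
            }
        }
    }
}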


When we tried to use the KeyIdentifier as "DirectReference" or "IssuerSerial" in a single WSS4JOutInterceptor and specified the BinarySecurityToken element in "signatureParts" as shown above, it gave the following error:

Caused by: org.apache.ws.security.WSSecurityException: General security error (WSEncryptBody/WSSignEnvelope: Element to encrypt/sign not found: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd, BinarySecurityToken)
at org.apache.ws.security.message.WSSecSignature.addReferencesToSign(WSSecSignature.java:588)
 at org.apache.ws.security.message.WSSecSignature.build(WSSecSignature.java:769)
 at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:57)

In order to tackle the problem of the missing BinarySecurityToken element in the Security header before the interceptor tries to sign the BST (BinarySecurityToken) element, the BST element is added before the request is passed to the invoke() method. The code added is as follows:

 
final SOAPFactory sf = SOAPFactory.newInstance();
final SOAPElement securityElement = sf.createElement("Security", "wsse", XSD_WSSE);
final SOAPElement authElement = sf.createElement("BinarySecurityToken", "wsse", XSD_WSSE);
authElement.setAttribute("EncodingType", "http://.....1.0#Base64Binary");
authElement.setAttribute("ValueType", "http://.....1.0#X509v3");
authElement.setAttribute("wsu:Id", "CertId-CA440EE13ADE87BAE5133044746778913");
authElement.addAttribute(new QName("xmlns:wsu"), XMLNS_WSU);
authElement.addTextNode("SMDFhdffIUSDFJL9090ddf213asdsKFHkfdfgjfs234gbhfg56icxdd24rgd");
securityElement.addChildElement(authElement);
soapRequest.getSOAPHeader().addChildElement(securityElement);


But instead of detecting the BST element and trying to sign it, the WSS4JOutInterceptor throws the following exception:

org.apache.xmlbeans.XmlException: error: Attribute "Id" bound to namespace "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" was already specified for element "wsse:BinarySecurityToken".

Following a suggestion from the CXF forum, two interceptors extending WSS4JOutInterceptor were configured. The first one was configured with the KeyIdentifier "DirectReference", while the second one used "IssuerSerial". Now the BinarySecurityToken (BST) was generated, but nothing was signed. Also, the IssuerSerial and Timestamp elements were absent from the Security header. On swapping the KeyIdentifier values across the two interceptors, the BinarySecurityToken vanished but all the previously missing elements reappeared. This led to the suspicion that only the first interceptor was being called while the second interceptor remained unexecuted.
      After running the debugger numerous times, the suspicion was confirmed. It was mentioned in the forum that the instance names of the two interceptors, along with their class names, should be different in order for both of them to be executed. But success still remained far off. One doubt remained: both interceptors extend WSS4JOutInterceptor for all their functionality, with just the class name being different.
    Looking at the source code of org.apache.cxf.ws.security.wss4j.WSS4JOutInterceptor below, it appears that the getId() method of the WSS4JOutInterceptorInternal class is called before the handleMessage() method of the inner class. This handleMessage() method (line 257) in turn calls the doSenderAction() method defined in the org.apache.ws.security.handler.WSHandler class.


public class WSS4JOutInterceptor extends AbstractWSS4JInterceptor {
       ...................................

   private WSS4JOutInterceptorInternal ending;

   public WSS4JOutInterceptor() {
         super();
         setPhase(Phase.PRE_PROTOCOL);
         getAfter().add(SAAJOutInterceptor.class.getName());
         ending = createEndingInterceptor();
   }

       ...................................
   final class WSS4JOutInterceptorInternal implements PhaseInterceptor<SoapMessage> {
            ...................................
     public void handleMessage(SoapMessage mc) throws Fault { ………….
           doSenderAction(doAction, doc, reqData, actions, somebooleanvalue);
     }
            ...................................
     public String getId() {
            return WSS4JOutInterceptorInternal.class.getName();
     }
            ...................................
   }
}


If the getId() method of the WSS4JOutInterceptorInternal class is altered to return a different class name rather than the actual one, then the following exception is thrown:

SystemErr     R javax.xml.ws.soap.SOAPFaultException: Unknown exception, internal system processing error.
SystemErr     R      at org.apache.cxf.jaxws.DispatchImpl.mapException(DispatchImpl.java:235)
SystemErr     R      at org.apache.cxf.jaxws.DispatchImpl.invoke(DispatchImpl.java:264)
SystemErr     R      at org.apache.cxf.jaxws.DispatchImpl.invoke(DispatchImpl.java:195)

When a new interceptor (MSMBSTWSS4JOutInterceptor), containing a copy of the WSS4JOutInterceptor code, is added along with the old interceptor (MSMWSS4JOutInterceptor) that extends WSS4JOutInterceptor, then both interceptors are invoked one after the other. Hence the first interceptor creates the BinarySecurityToken, while the second interceptor (extending WSS4JOutInterceptor) signs all the elements including the previously created BinarySecurityToken.
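
For illustration only, the same pair of interceptors could be attached programmatically to a proxy-based CXF client. The constructors taking the WSS4J property map are assumed here, and the property values simply mirror the Spring configuration shown earlier; for a Dispatch-based client the interceptors are attached through the Spring configuration instead.

import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;

public class InterceptorWiring {

    public static void attachWsSecurityInterceptors(Object port) {
        Client client = ClientProxy.getClient(port);

        Map<String, Object> bstProps = new HashMap<String, Object>();
        bstProps.put("action", "Signature");
        bstProps.put("signatureKeyIdentifier", "DirectReference");
        // ... user, signaturePropFile, passwordCallbackRef etc. as in the Spring configuration

        Map<String, Object> signProps = new HashMap<String, Object>();
        signProps.put("action", "Timestamp Signature");
        signProps.put("signatureKeyIdentifier", "IssuerSerial");
        // ... remaining signature entries as in the Spring configuration

        // the first interceptor creates the BinarySecurityToken, the second signs
        // the Timestamp, Body and BinarySecurityToken (constructors assumed)
        client.getOutInterceptors().add(new MSMBSTWSS4JOutInterceptor(bstProps));
        client.getOutInterceptors().add(new MSMWSS4JOutInterceptor(signProps));
    }
}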

The same issue of signing the BinarySecurityToken can be resolved by overriding the Apache CXF and WSS4J classes. We first override the doSenderAction() method of the WSHandler class in the BSTWSS4JOutInterceptor implementation.

public class BSTWSS4JOutInterceptor extends WSS4JOutInterceptor {

    private static final String msmActionClass = "org.example.BSTSignatureAction";

    protected void doSenderAction(int doAction, Document doc, RequestData reqData,
                                  Vector actions, boolean isRequest) {

        boolean mu = decodeMustUnderstand(reqData);
        ............
        ............
        for (int i = 0; i < actions.size(); i++) {
            int actionToDo = ((Integer) actions.get(i)).intValue();
            ............
            switch (actionToDo) {
            case WSConstants.UT:
            case WSConstants.ENCR:
            case WSConstants.SIGN:
            case WSConstants.ST_SIGNED:
            case WSConstants.ST_UNSIGNED:
            case WSConstants.TS:
            case WSConstants.UT_SIGN:

                if (isBSTEnabled && actionToDo == WSConstants.SIGN) {

                    Action doit = null;

                    try {
                        doit = (Action) Loader.loadClass(msmActionClass).newInstance();
                    } catch (Throwable t) {
                        if (log.isDebugEnabled()) {
                            log.debug(t.getMessage(), t);
                        }
                        throw new WSSecurityException(WSSecurityException.FAILURE,
                            "unableToLoadClass", new Object[] { msmActionClass }, t);
                    }

                    if (doit != null) {
                        doit.execute(this, actionToDo, doc, reqData);
                    }

                } else {
                    wssConfig.getAction(actionToDo).execute(this, actionToDo, doc, reqData);
                }

                break;

            case WSConstants.NO_SERIALIZE:
                reqData.setNoSerialization(true);
                break;
            default:
                Action doit = null;
                ............
        ............
    }
}


Now, in the overriding BSTSignatureAction class, we override the implementation of the execute() method in order to change the WSSecSignature class to the BSTWSSecSignature class as follows:

public class BSTSignatureAction implements Action {
 
 public void execute(WSHandler handler, int actionToDo, Document doc,
                     RequestData reqData){
 
     String password = handler.getPassword(...).getPassword();
     BSTWSSecSignature wsSign = new BSTWSSecSignature();
     ............
     ............
     ............
     try {
       wsSign.build(doc, reqData.getSigCrypto(), reqData.getSecHeader());
       reqData.getSignatureValues().add(wsSign.getSignatureValue());
     } catch (WSSecurityException e) { ... }
 }
}


Finally, we create the BSTWSSecSignature class (extending WSSecBase) and override the build() method so that the BinarySecurityToken is added and signed first, before the normal IssuerSerial signing takes place, as follows:
public class BSTWSSecSignature extends WSSecBase {

............
 public Document build(Document doc, Crypto cr, WSSecHeader secHeader)
                 throws WSSecurityException {

     ............
     // call the addBST() method, a duplicate of the prepare() method where keyIdentifierType
     // is treated only as BST_DIRECT_REFERENCE in its switch case
     addBST(doc, cr, secHeader);

     // create an empty vector for the signature parts
     Vector<WSEncryptionPart> bstparts = new Vector<WSEncryptionPart>();

     if (parts != null) {
        for (WSEncryptionPart part : (Vector<WSEncryptionPart>) parts) {
          if (part.getName().equalsIgnoreCase("Body")) {
            // add the Body element, since by default the Body element is signed
            // when the signature parts list is empty
            bstparts.add(part);
          }
        }
     }

     // add the references to sign to the Security header
     addReferencesToSign(bstparts, secHeader);
     // prepend the signature at the top of the Security header
     prependToHeader(secHeader);
     // compute the digest values for the Body element signature using the
     // BinarySecurityToken
     computeSignature();

     if (bstToken != null) {
        // prepend the BinarySecurityToken element above the signature in the
        // Security header
        prependBSTElementToHeader(secHeader);
     }

     // continue with the normal process of signing and adding IssuerSerial signatures
     prepare(doc, cr, secHeader);
     SOAPConstants soapConstants =
         WSSecurityUtil.getSOAPConstants(doc.getDocumentElement());

     if (parts == null) {
        parts = new Vector();
        WSEncryptionPart encP =
            new WSEncryptionPart(
                soapConstants.getBodyQName().getLocalPart(),
                soapConstants.getEnvelopeURI(),
                "Content"
            );
        parts.add(encP);
     }

     addReferencesToSign(parts, secHeader);
     prependToHeader(secHeader);
     // eliminate the call to prependBSTElementToHeader() as it was already called above
     computeSignature();

     return doc;
 }
     ............
}


Receiving Response with Security Confirmation:


Initially “enableSignatureConfirmation” was set to “true” only in the wss4jInConfiguration.

<bean class="com.bank.ebusiness.mobile.nfc.wssecurity.MSMWSS4JInInterceptor" id="wss4jInConfiguration">
              ......................
              <property name="properties">
                      <map>
                              <entry key="action" value="Timestamp"/>
                               ......................
                              <entry key="enableSignatureConfirmation" value="true"/>
                      </map>
              </property>
              ......................
</bean>


This caused the following error to pop up:

0000001e SystemErr     R Caused by: org.apache.ws.security.WSSecurityException:
 WSHandler:  Check Signature confirmation: got SC element, but no matching SV
       at org.apache.ws.security.handler.WSHandler.checkSignatureConfirmation(WSHandler.java:392)
       at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:224)

After repeated combinations and retries it became clear that "enableSignatureConfirmation" has to be set to "true" not only for the WSS4JInInterceptor but also for both WSS4JOutInterceptors. The likely reason is that two "SecurityConfirmation" elements are added in the response from the Weblogic server. At the receiving end, when the "enableSignatureConfirmation" entry is enabled in the WSS4JInInterceptor, it looks for matching entries in the stored signature values vector in order to verify the two incoming elements. As neither of the WSS4JOutInterceptors had "enableSignatureConfirmation" enabled, there are no entries in the vector to verify against, hence we get the above exception.

......................
<wsse11:SecurityConfirmation>sdfsa9er8sd9f8sd9fgds</wsse11:SecurityConfirmation>
<wsse11:SecurityConfirmation>sdfsa9er8sd9f8sd9fgds</wsse11:SecurityConfirmation>
......................

<bean class="com.bank.ebusiness.wssecurity.MSMWSS4JOutInterceptor" id="wss4jOutConfiguration">
        ....................
        <property name="properties">
                <map>
                        <entry key="action" value="Timestamp"/>
                         ....................
                        <entry key="enableSignatureConfirmation" value="true"/>
                </map>
        </property>
        ....................
</bean>


Further, when we tried to alter the contents of even one of the elements, the error below was thrown again, since the contents of the SC elements no longer match the contents stored in the SC vector.

org.apache.cxf.binding.soap.SoapFault: WSHandler: Check Signature confirmation: got a SC element, but no stored SV.

Going further when the value of the “action” entry was “Timestamp Signature” it threw the following exception:

Security processing failed (actions mismatch)
Caused by: org.apache.ws.security.WSSecurityException: An error was discovered processing the <wsse:Security> header
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:290)


After debugging the source, it was found that the exception originated from line 290 of the class org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor. The following is the relevant piece of code:

// now check the security actions: do they match, in any order?
 
  if (!ignoreActions && !checkReceiverResultsAnyOrder(wsResult, actions)) {
      LOG.warning("Security processing failed (actions mismatch)");
      throw new WSSecurityException(WSSecurityException.INVALID_SECURITY);
  }

The call to the checkReceiverResultsAnyOrder() method returned false, causing it to throw the WSSecurityException. After a deeper look into the source code of the checkReceiverResultsAnyOrder() method in the org.apache.ws.security.handler.WSHandler class, it was found that it compares the elements in the response with the actions specified in the configuration entry of the WSS4JInInterceptor. It checks whether an action is specified corresponding to each element present in the Security header of the response. However, as the lines below show, the SecurityConfirmation (SC) and BinarySecurityToken (BST) elements in the response do not need a corresponding action name in the configuration. This seems logical, as the possible values for the "action" entry in the configuration are { NoSecurity, UsernameToken, UsernameTokenNoPassword, SAMLTokenUnsigned, SAMLTokenSigned, Signature, Encrypt, Timestamp, UsernameTokenSignature }.

protected boolean checkReceiverResultsAnyOrder(Vector wsResult, Vector actions) {

    java.util.List recordedActions = new Vector(actions.size());

    for (int i = 0; i < actions.size(); i++) {
        Integer action = (Integer) actions.get(i);
        recordedActions.add(action);
    }

    for (int i = 0; i < wsResult.size(); i++) {

        final Integer actInt = (Integer) ((WSSecurityEngineResult) wsResult
                    .get(i)).get(WSSecurityEngineResult.TAG_ACTION);

        int act = actInt.intValue();

        if (act == WSConstants.SC || act == WSConstants.BST) {
            continue;
        }

        if (!recordedActions.remove(actInt)) {
            return false;
        }
    }

    if (!recordedActions.isEmpty()) {
        return false;
    }
    return true;
}


Now, looking at the response below from the WS-Security enabled Weblogic server, the actions would have to correspond to the SecurityConfirmation and Timestamp elements present in the Security header. But, from the above information, there is no such action as "enableSignatureConfirmation", so we are left with only the "Timestamp" action in the WS-Security configuration entry, which resolves the exception.
<s:Envelope xmlns:s="http://...">
   <s:Header>
      <wsse:Security s:mustUnderstand="1" xmlns:wsse="http://...">
         <wsse11:SecurityConfirmation Value="JYhr32…" wsu:Id="sigconf_iXi…" xmlns:wsu="http://..."/>
         <wsse11:SecurityConfirmation Value="MK2krG2…" wsu:Id="sigconf_iXi…" xmlns:wsu="http://..."/>
         <wsu:Timestamp xmlns:wsu="http://...">
           <wsu:Created>2012-03-02T19:35:55Z</wsu:Created>
           <wsu:Expires>2012-03-02T19:36:55Z</wsu:Expires>
         </wsu:Timestamp>
      </wsse:Security>
   </s:Header>
   <s:Body>
      <ns0:CheckEligibilityResponse xmlns:ns0="http://...">
         <resultcode>100</resultcode>
         <resultmessage>SUCCESS</resultmessage>
      </ns0:CheckEligibilityResponse>
   </s:Body>
</s:Envelope>


Finally, when a request was sent to the Oracle Weblogic server, the following error was encountered:
WSDLException (at /con:soapui-project): faultCode=INVALID_WSDL: Expected element '{http://schemas.xmlsoap.org/wsdl/}definitions' when trying to load.

On carefully comparing the request sent by the client with the one sent by SOAP-UI, it was found that the request element in the client request was "<CheckEligibility>" while the one in the SOAP-UI request was "<CheckEligibilityRequest>".

Sunday, August 18, 2013

Windows Commands

   Over recent years many new commands have been introduced in Windows operating systems besides the original DOS commands. These newly added commands enable us to carry out operations which are quite helpful and sophisticated. The full documentation of all the commands is available on Microsoft's MSDN website.


1) XCOPY:
    The following MS-DOS command copies files and directories from source to destination: "/E" creates empty directories, "/C" continues even if there is an error, "/H" includes hidden/system files, "/R" overwrites read-only files in the destination, "/K" retains the file attributes, "/O" copies ownership/access control list information, and "/Y" suppresses prompting while overwriting files.

    xcopy source destination /E /C /H /R /K /O /Y

   The following command copies files and directories from source to destination: "/C" continues even if there is an error, "/D" copies only files whose source time is newer than the destination time, "/S" copies files and subdirectories recursively except empty directories, and "/H" includes hidden/system files.

    xcopy source destination /C /D /S /H


2) ROBOCOPY
       Robocopy is a very powerful external command to copy files in Windows. The following command mirrors all the files, including empty directories, from the given source location to the destination:

    robocopy source destination   /MIR


3) TASKKILL:
     It is used to kill one or more tasks/processes using a process id or process name. The following command forcefully terminates a process by name.
    taskkill /im processname /f

     The following command, on the other hand, terminates all the processes running under the user name "john".
     taskkill /F /FI "USERNAME eq john"


4) NETSTAT
     It displays active TCP connections, the ports on which the computer is listening, Ethernet statistics, the IP routing table and IPv4 statistics. The following command also displays the executable that created each connection, using the "-b" option.

    netstat -b


5) SHUTDOWN:
     The remote shutdown tool enables shutting down the local or a remote computer within the network.

     The following command shuts down the computer, closing all the applications after the specified time delay given with the "/t" option and displaying the message.
     shutdown \\computername /l /a /r /t:xx "msg" /y /c
     shutdown /l /t:120 "The computer is shutting down" /y /c

     The following command reboots ("/r") the remote machine specified with the "/m" option. It forces all applications to close ("/f") after a one-minute delay ("/t 60"), with the reason "Application: Maintenance (Planned)" ("/d p:4:1") and the comment ("/c") "Reconfiguring Applications":

     shutdown /r /m \\RemoteMachine /t 60 /c "Reconfiguring Applications" /f /d p:4:1


6) SCHTASKS:
     The schtasks command is used to query or execute tasks in the Task Scheduler.

     Following command lists all the tasks present on the remote machine.
     schtasks /query /s \\RemoteMachine

     Following command lists all the tasks matching the name "MyTask" present on the remote machine.
     schtasks /query /s \\RemoteMachine  | findstr "MyTask"

     Following command runs the specified task name with the full path present on the specified remote machine.
     schtasks /run /s \\RemoteMachine /tn "\Microsoft\Windows\Tasks\MyTask"

     Similarly following command ends the specified task on the remote machine.
     schtasks /end /s \\RemoteMachine /tn "\Microsoft\Windows\Tasks\MyTask"

     The following command queries the task matching the name "\Microsoft\Windows\Tasks\MyTask" on the remote machine. It displays the advanced properties of the task in a list format.
     schtasks /query /s \\RemoteMachine /tn "\Microsoft\Windows\Tasks\MyTask" /fo LIST /v

     Also we can create a new task in the task scheduler using the following command:
     schtasks /create /tn task_name       /tr "...\path\task.bat"       /sc daily              /st 10:00:00       /s \\ComputerName       /u username       /p password


7) SC:
     The SC command is used to communicate with the Service Controller to manage Windows services. It helps to create, update and delete Windows services, which run as background processes, using various options. Note that all the sc command options require a space between the equals sign and the value.

     The following command creates a new Windows service with the specified name and runs the executable specified with the binpath option.
     sc create "servicename" binpath= "C:\Windows\System32\sample.exe" DisplayName= "Sample Service" start= auto

     The following command deletes the Windows service with the specified name.
     sc delete servicename

     Below command lists all the windows services on the command line.
     sc queryex type= service state= all | find "_NAME"

     Alternatively following service commands can be used to start/stop windows services:
     Start a service:       net start servicename
     Stop a service:        net stop servicename
     Pause a service:       net pause servicename
     Resume a service:      net continue servicename


8) WMIC:
      The WMIC command provides a command-line interface to Windows Management Instrumentation (WMI). WMI is the infrastructure for managing data and operations of the Windows operating system, and it enables administrative tasks to be carried out using WMI scripts.
   
     Following command gives the hardware architecture details of the CPU of the current machine
     wmic cpu get caption

     Below command provides the information regarding the current Windows OS architecture, primarily 32/64 bit system.
     wmic OS get OSArchitecture


9) PSEXEC:
     This is a utility tool which allows us to execute commands on remote machines, redirecting the remote console output to our local system. There are many other advanced usages of the tool.

     psexec \\ComputerName cmd

10) NET USE:
      The NET USE command connects or disconnects a computer from a shared resource, or displays information about computer connections. The command below assigns the disk drive Z: to the shared directory on \\zdshare.

     net use Z: \\zdshare\IT\deploy

     The below command disconnects the Z drive from the \\zdshare directory.

     net use Z: /delete

     Help Option: Use the "/?" option to display the help for the command

     net use /?

11) FINDSTR:
      The FINDSTR command is used to search for patterns of text in files using regular expressions. The following command finds the specified text "APC", using /c for a literal search string and /i for a case-insensitive search, in all the files of the current directory ("*").

     findstr /i /c:"APC" *



Saturday, August 17, 2013

Test Driven Development

Test Driven Development is a well-known software development process which relies on the developer writing an automated test case before writing any piece of functional code. It emphasizes a series of unit tests and refactorings to arrive at a simple design.

   Everyone is accustomed to the general practice of software development which looks as below:
  • Design: Figure out how you're going to accomplish all the functionality.
  • Code: Type in the code that implements the design.
  • Test: Run the code a couple of times to see if it works, then hand it over to QA.

On the other hand Test Driven Development modifies this approach as below:
  • Test: Figure out what the next chunk of function is all about.
  • Code: Make it do that.
  • Design: Make it do that excellently.

As described above TDD completely inverts the accepted ordering of 'design-code-test'. So, from one view, TDD just puts the design after the test and the code. Refactoring is considered as pure design in TDD.

   In the TDD world we are not allowed to figure out a complete or excellent design to get our test (and all existing tests) to pass before we start coding. There is sometimes a debate on whether there should be some kind of initial design phase where interfaces (along with method signatures) for the future classes are defined. Further, it is not allowed to reduce or skip the "refactor" step during TDD development. Hence after each iteration with a passing test, there should be refactoring done on the code, which indirectly contributes to the design. Also, once a test is written, TDD allows us to do any of the following during implementation to pass the test:
  1. Reuse some existing code
  2. Introduce meaningful new class(es) and method(s)
  3. Copy existing method(s) and change the copies
TDD helps with certain aspects of integration, as the entire process is divided into a series of small steps. The more often we check code into the version control system, and the smaller our changes are, the less likely we are to get 'merge conflicts' with others. Also, every commit is a guaranteed fallback position, a piton in the rock that we can easily go back to if we slip and fall.

Below is the Red-Green-Refactor Rule for Test Driven Development:

RED: When you write the test, you are designing the behavior you expect the code-under-test to perform.
GREEN: When you write the code to pass the test, you are designing the internal implementation of that behavior.
REFACTOR: Your micro-focus on getting to green probably 'un-designed' the code. When you refactor you are re-designing.
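
As a concrete illustration of the cycle (the Wallet class and its test are invented for this post), a minimal JUnit example might look like this:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// RED: this test is written first and fails because Wallet does not exist yet
public class WalletTest {

    @Test
    public void depositIncreasesBalance() {
        Wallet wallet = new Wallet();
        wallet.deposit(50);
        assertEquals(50, wallet.getBalance());
    }
}

// GREEN: the simplest implementation that makes the test pass
class Wallet {
    private int balance;

    void deposit(int amount) {
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}
// REFACTOR: with the bar green, rename, extract and clean up while re-running the test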




The Stepwise Premise for TDD goes as below:
   -  Can gigantic complex architectures really be created using nothing other than red-green-refactor?
   -  Consider these issues:
  • All large solutions don't just materialize out of nowhere; they are ultimately created in modest steps anyway.
  • Even if we have analysis and design phases for large-scale architectural features, we can still develop using TDD.
  • Considerable data is available to support the idea that complex global design processes frequently don't work.
  • TDD has a serious track record: it is being used all over the world to create complex systems.
Below are the commonly used TDD patterns:

Specify It
  • Essence First: What is the most basic functionality needed, not including anything fancy
  • Test First:       What exactly will we be testing? Capture that in the test method name.
  • Assert First:    What behavior would you like to check?  Writing the assert statement will lead us to produce the structure backwards by "backfilling the method" by declaring the objects and methods we need to create as well as the expected result of calling the new code.
Frame It
  • Frame First: Create whatever class(es), constructor(s) and method(s) are needed by our assert statement.
Evolve It
  • Do The Simplest Thing That Could Possibly Work: Focus on minimalism by asking oneself to program only what is absolutely necessary to pass a test.
  • Break It To Make It: Write new test code that we know will fail because our production code isn't yet capable of handling the new test.
  • Refactor Mercilessly: Make design improvements continuously, aggressively, mercilessly avoiding really bad code.
  • Test Driving:  In TDD, we don't want to stray too far from the Green Bar.

Finally, Robert Martin, one of the fervent devotees of Test Driven Development, provides the three laws of TDD in his book Clean Code as below:
  • First Law: You may not write production code until you have written a failing unit test.
  • Second Law: You may not write more of a unit test than is sufficient to fail, and not compiling is failing.
  • Third Law: You may not write more production code than is sufficient to pass the currently failing test.

Refactoring generally involves taking an existing class that's too complex and breaking it into smaller classes, each of which takes part of the old class's responsibility, and all of which work together. There are numerous advantages to refactoring classes into smaller ones, some listed as follows:

   1)  By making classes smaller, thus easier to grasp at one time.
   2)  By aligning the smaller classes with a well-understood functional breakdown of the underlying problem.
   3)  By making the couplings between classes mirror the couplings between functionality.
   4)  By (ultimately) allowing complex systems to be built by composing many simpler objects.
   5)  By making each smaller class easier to test.

Refactoring also involves Decremental Development, which means finding ways to shrink the code even as we continue to add new features. All the common functionality is moved into a library, while pre-existing libraries (core as well as external) with the required implementation are searched for instead of re-inventing the wheel.


GUI Applications

In order to apply TDD to GUI applications, they need to have a clear separation between the user interface and the operational logic, most commonly achieved with the MVC pattern. The model/view split isn't the only technique for TDD'ing GUIs, but it does represent the meta-pattern for all of them.
Following can be achieved by splitting responsibilities:
  • We can test the Model by having our TestCase pretend to be the View.
  • The most important interactions are on the Model, enabling to test core functionality.
  • We can use fake domain objects for testing, which in turn are used by the Model.
  • We can test the View by creating a fake Model and driving it that way.
  • The View can be tested by driving the windows programmatically.

A lot of enhancements can be applied to the Model-View split further such as follows:
 - Add Publisher-Subscriber to allow multiple Views on the same Model.
 - Add a Controller class to translate View-gestures into Model-commands.
 - Add a Command system to isolate and manipulate individual commands.


Test Driven Development Shortcomings

TDD is a development process which assures quality by enforcing unit tests. However, the quality of the code mainly depends on the quality of the tests, not on when the tests are written during development or how many lines are covered. The essential purpose of writing unit tests is to reduce the possibility of defects in the development phase itself and to provide a set of automated tests to validate future changes without introducing new defects. Although such an approach is greatly beneficial, the questions often raised are: to what extent should the tests be written? When does this approach lose efficiency relative to the value of auto-tested code? Does it provide an optimal solution to the complex process of software development and unforeseen defects? Is the time and effort spent in writing unit tests to prevent and decrease defects the best approach?

Most unit testing tutorials, TDD books and sites describe the approach with basic examples such as processing student grades, calculating wages etc. Although this does give us a perspective and seems to make the approach by far the best one, when applied in the corporate world such an approach has some inherent issues, listed below:

1) Testing a piece of code completely may involve a huge number of scenarios to be considered. Even selecting the subset of critical cases and writing the test cases for them involves almost as much effort as writing the original functional code. And even after selecting a subset of critical cases, we still open ourselves to possible defects arising from the ignored scenarios. How do we decide which cases are critical and which can be ignored? Some cases may be ignored at first, but considering the entire system, such cases could lead to vital failures. Hypothetically, even if we painstakingly compiled all the critical cases and wrote unit tests for the entire application, we could not be sure that no defects would come up from the unit tested code. Often the unit tests validate obvious scenarios (mostly by replicating the code/object in the unit test or verifying that a method gets called), thus providing us with a false sense of security. This is mostly caused when the same person writes both the test and the code.

2) Compared to most of the unit testing examples in tutorials, books and articles, professional code is not that simple or straightforward to isolate. Many real-world systems involve file handling, calls to external services, databases, invoking external processes and multi-threaded operations. The outcome of these operations is hard to predict. We cannot always anticipate the possible values returned by the external services or by the database. Some scenarios, such as concurrent operations, server timeouts etc., are difficult to recreate in a unit test environment. Even if a unit test could be written to check the handling of possible service failures, it would require a substantial amount of effort compared to manual or integration testing.

3) The basic premise of TDD is that the test drives the system design and implementation. Hence if a line of code cannot be tested then it shouldn't have been written at all. Sometimes, due to the limitations of unit testing tools such as JUnit, Mockito and others, a unit test cannot test a certain piece of code in isolation. Static methods are one such case where, despite using PowerMock, many questions are raised over the effectiveness of those tests. Also, private class fields/methods mostly tend to be changed to lower access modifiers to facilitate unit testing as far as JUnit is concerned. Concerns are also raised about the use of Mockito's InjectMocks in unit tests, and constructor-based auto-wiring is recommended instead of setter or field based auto-wiring. This ultimately restricts the usage of some features of the programming language or the frameworks to within the boundaries of testability, often tagged as bad design.

4) As mentioned previously by Robert Martin, no production code should be written without a corresponding failing test. This totally ignores whether the unit test is effective, productive and valuable in catching issues. Further, it blurs the line between writing a unit test for the behavior/functionality of the code and mapping each line of production code to a corresponding unit test. For example, creating a new object, setting values on an object, non-conditional calls to a library's void methods, logging etc. certainly add up to numerous lines of production code, but they hardly articulate any logic or behavior. Consider the following code below:

Properties properties = new Properties();
properties.setProperty("key", "value");
properties.store(new FileOutputStream("C:/test.properties"), null);

The above code creates a Properties object and uses the built-in store method of the API to create a properties file without any conditional logic. Many "what if" arguments could be made, such as what if the store method is not called, or the file path is incorrect, or the properties are not set or are set incorrectly, which is often a slippery slope. But mandating the existence of a line of code or its order is not the purpose of a unit test; the purpose is to make sure an independent chunk of code behaves as intended. Any piece of code which has only a single logical flow and returns the same or similar results regardless of the input has no concrete behavior. Further, if the code does not provide any behavior by itself, or relies on external library methods for its behavior, then unit testing such code not only adds overhead and maintenance but fails to provide any productive feedback to detect real problems.
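
The kind of test this rule forces for the snippet above (the test itself is hypothetical; the path and key are taken from the snippet) ends up re-verifying the JDK's store()/load() behavior rather than any logic of our own:

import static org.junit.Assert.assertEquals;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

import org.junit.Test;

public class PropertiesWriterTest {

    @Test
    public void storesKeyToPropertiesFile() throws Exception {
        // mirrors the production code line for line
        Properties properties = new Properties();
        properties.setProperty("key", "value");
        properties.store(new FileOutputStream("C:/test.properties"), null);

        // then re-reads the file to "prove" the JDK did its job
        Properties reloaded = new Properties();
        reloaded.load(new FileInputStream("C:/test.properties"));
        assertEquals("value", reloaded.getProperty("key"));
    }
}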
    Further, mandating TDD during a proof of concept or trial and error to fix a known problem not only increases the development overhead exponentially but also distracts the developer from the core task/problem.

5) Someone has said, "the line of code that is fastest to write, that never breaks, that doesn't need maintenance, is the line you never have to write". In Test Driven Development, as the unit tests drive the development (rather than us choosing the critical methods to unit test), there is a lot more test code involved. Multiple scenarios for a given piece of code may encourage duplicate code unless only a single person works on it. In corporate projects such big chunks of test code add to the maintenance burden of the system. Badly written unit tests, which often involve hard-coded error strings, consume further time and effort to maintain. Fragile tests which generate false failures tend to be ignored, even in the case of valid errors. Modifying existing functionality using TDD becomes quite challenging, as we need to deal with a mesh of interconnected mock objects and a series of test cases.

 Finally, the root issue with TDD is not the effort or time required to write the tests, but their value compared to that effort, i.e. developer productivity. TDD is much easier to apply when the design documents dictate the classes/methods and their functionality beforehand. It also helps if all the possible test cases are listed (usually by testers) for the pre-designed classes.


Was it really Behavior Driven Development ?

Since this 2013 blog post was written, many others have joined in questioning the effectiveness of TDD. David Heinemeier Hansson, the creator of Ruby on Rails, described TDD as "Test-first fundamentalism is like abstinence-only sex ed: An unrealistic, ineffective morality campaign for self-loathing and shaming". After that blog post Kent Beck put forward his sarcastic defense of TDD, which was later followed by a conversation with Martin Fowler on whether TDD is dead. Though the conclusion of the conversation was that TDD is valuable in some contexts, much disagreement remained over the number and type of contexts in which it should be applied. Then, at the DevTernity 2017 conference, Ian Cooper gave a talk, "TDD, Where Did It All Go Wrong", which was promoted by Uncle Bob Martin. In the talk Cooper pointed out that TDD is being practiced incorrectly, since we are focused on testing implementation details instead of testing system behavior. Because of this we often write more test code than implementation code. Such implementation-driven tests, with their spaghetti of mocks, make refactoring painful and maintenance a nightmare, and decrease overall development productivity. Developers too often don't understand the intent of such tests and are unable to deduce the system behavior by reading them. Enhancements and redesigns become difficult, as changing the implementation also requires changing the tests, which is a long-haul process.

TDD is mainly practiced by using 'adding a new method to a class' as the trigger to write a test. Such a test-case-per-class approach fails to capture the true ethos of TDD. Adding a new class or method is not the trigger for writing tests; the trigger is implementing a requirement. Write tests to cover the use cases or user stories, not the implementation classes or methods. The system under test is not a class but the exports from a module or its facade. The 'unit' in 'unit testing' here really means the module, not a class. A class by itself can be the facade, but many classes are implementation details of the module. Do not write tests for implementation details; these change. Write tests only against the stable contract of the (public) API (which can be within a module).

Ian Cooper referenced the first book on TDD, "Test-Driven Development: By Example" by Kent Beck, and pointed out that Kent has explicitly stated that we need to be testing behavior, not the implementation. On page 4 of the book Kent writes, "What behavior will we need to produce the revised report? Put another way, what set of tests, when passed, will demonstrate the presence of code we are confident will compute the report correctly?", which clearly refers to testing behavior, not implementation. Kent further states, "When we write a test, we imagine the perfect interface for our operation. We are telling ourselves a story about how the operation will look from the outside. Our story won't always come true, but it's better to start from the best-possible application program interface (API) and work backward than to make things complicated, ugly, and 'realistic' from the get-go", which affirms testing APIs, not implementation methods.

The tests should run in isolation from other tests, but not from the system under test. The unit of isolation is not the class under test but the tests themselves, and tests can and should exercise several classes working together if that is what is needed to test the behavior. We avoid the file system and the database simply because these shared fixture elements prevent us from running in isolation from other tests, or make the tests slow. But if there is no shared fixture problem (one test does not affect another), then it is perfectly fine to talk to a database (though in-memory) or the file system in unit tests.

Focusing on methods for testing creates tests which are hard to maintain and code which is difficult to refactor, because implementation details are exposed to the tests. Such tests do not capture the behavior we want to preserve and become difficult to understand. Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. It is the step where we improve our design/implementation, produce clean code, remove duplication, sanitize code smells and apply design patterns. During refactoring to clean code we should not write new unit tests, since we are not introducing new public APIs or classes. Dependency is the key problem in software development at all scales, and the dependency between the tests and the code should be reduced by avoiding mocking: tests that depend on implementation details through mocks break whenever the implementation changes. Hence mocks should be avoided at all costs, except to isolate the tests at module boundaries (databases, external services, file systems).
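As a concrete illustration of the difference, below is a minimal JUnit 4 sketch of a behavior-level test written against a module facade; PriceCalculator and its totalFor() method are hypothetical names introduced only for this example.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A hypothetical pricing module; its facade is the only thing the tests touch.
class PriceCalculator {
    double totalFor(double amount) {
        // Internal rule: 10% discount for orders of 100 or more.
        // The tests do not care how this is computed internally.
        return amount >= 100 ? amount * 0.9 : amount;
    }
}

public class PriceCalculatorTest {

    @Test
    public void ordersOfHundredOrMoreGetTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        // Asserts the observable behavior (the final price), not which internal methods were called.
        assertEquals(90.0, calculator.totalFor(100.0), 0.001);
    }

    @Test
    public void smallOrdersArePricedAtFaceValue() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(40.0, calculator.totalFor(40.0), 0.001);
    }
}

Refactoring the internals of the pricing module (extracting helper classes, renaming private methods, changing data structures) leaves these tests green, which is exactly the property Cooper argues for.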

Tuesday, April 9, 2013

Logging Frameworks

In any complex application comprising several components working together, tracking failures effectively becomes challenging. Even though the application is separated into individual components, a trace of the operation is required to investigate potential failures. In such circumstances, logging individual component activities comes in handy and provides great insight into periodic operations. Logging using System.out and FileWriter in Java was once prevalent, but with more sophisticated frameworks available, such techniques have become a thing of the past. Apart from countless others, three major logging frameworks are dominant in the Java world: Log4J, SLF4J and Logback.

Java Logging API
The Java logging API provides a basic set of logging capabilities in the java.util.logging package via the Logger class. Loggers form a hierarchy, and a "." (dot) in the logger name indicates a level in that hierarchy: the logger for "com.example" is a child of the "com" logger, and the "com" logger is a child of the logger for the empty string. We can configure a parent logger and the configuration affects all of its children. Log levels such as SEVERE, WARNING and INFO define the severity of a message; the Level class is used to define which messages should be written to the log, with the levels OFF and ALL turning logging off or logging everything. Each logger can have several handlers, which receive log messages from the logger and export them to a target such as a file (FileHandler) or the console (ConsoleHandler). Each handler's output can be configured with formatters, such as SimpleFormatter to generate messages as text or XMLFormatter to generate messages in XML format. The LogManager is responsible for creating and managing loggers and for maintaining the configuration.
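A minimal sketch of the API described above (the logger name, file name and levels are illustrative):

import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class JulExample {

    public static void main(String[] args) throws Exception {
        // "com.example.app" is a child of the "com.example" and "com" loggers, and of the root logger "".
        Logger logger = Logger.getLogger("com.example.app");
        logger.setLevel(Level.ALL);

        // The console handler receives everything; the file handler keeps only INFO and above.
        ConsoleHandler console = new ConsoleHandler();
        console.setLevel(Level.ALL);

        FileHandler file = new FileHandler("%h/myApp.log", true);
        file.setLevel(Level.INFO);
        file.setFormatter(new SimpleFormatter());

        logger.addHandler(console);
        logger.addHandler(file);

        logger.fine("fine message - console only");
        logger.info("info message - console and file");
        logger.severe("severe message - console and file");
    }
}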

Logging can be configured using a log.properties file, as in the sample configuration below.

# Logging
handlers = java.util.logging.FileHandler, java.util.logging.ConsoleHandler
.level = ALL

# File Logging
java.util.logging.FileHandler.pattern = %h/myApp.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = INFO

# Console Logging
java.util.logging.ConsoleHandler.level = ALL


The "-Djava.util.logging.config.file=/absolute-path/logging.properties" parameter is used to load a custom log.properties for java util logging. It works with following cases:
  • Move the file log.properties to the default package (the root folder for your sources)
  • add it directly to the classpath (just like a JAR)
  • You can specify the package in which the file is, replacing "." with "/": -Djava.util.logging.config.file=com/company/package/log.properties
  • You can specify the absolute path

A crude but common way to silence all console logging from any framework is to redirect the standard output and error streams to the null device, as follows:
static {
  // Redirect stdout and stderr to the null device ("NUL:" on Windows; "/dev/null" on Unix-like systems).
  try {
      PrintStream nps = new PrintStream(new FileOutputStream("NUL:"));
      System.setErr(nps);
      System.setOut(nps);
  } catch (FileNotFoundException e) {
      e.printStackTrace();
  }
}


Log4J Framework
Log4J is the oldest of these frameworks and is widely used due to its simplicity. It defines various log levels and messages. Log4j is thread-safe and optimized for speed. It is based on a named logger hierarchy and supports multiple output appenders per logger as well as internationalization.
Log4j is not restricted to a predefined set of facilities. Its logging behavior can be set at runtime using a configuration file, and it is designed to handle Java exceptions from the start. Log4j uses multiple levels, namely ALL, TRACE, DEBUG, INFO, WARN, ERROR and FATAL, to denote log levels. The format of the log output can easily be changed by extending the Layout class, and the target of the log output as well as the writing strategy can be altered by implementations of the Appender interface. Log4j is fail-stop, but it does not guarantee that each log statement will be delivered to its destination.
   Below is a sample log4j property file: log4j.properties

#suppress logging from spring and hibernate to warn
log4j.logger.org.hibernate=WARN
log4j.logger.org.springframework=WARN

# Set root logger level to INFO and attach Appender1 and Appender2.
log4j.rootLogger=INFO, Appender1,Appender2
# Appender1 is a ConsoleAppender, Appender2 a RollingFileAppender.
log4j.appender.Appender1=org.apache.log4j.ConsoleAppender
log4j.appender.Appender2=org.apache.log4j.RollingFileAppender
log4j.appender.Appender2.File=sample.log
# Both appenders use PatternLayout.
log4j.appender.Appender1.layout=org.apache.log4j.PatternLayout
log4j.appender.Appender1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
log4j.appender.Appender2.layout=org.apache.log4j.PatternLayout
log4j.appender.Appender2.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

Log4j sample code is as follows:
// Load log4j.properties from the classpath and configure log4j programmatically.
try {
      Properties props = new Properties();
      props.load(TestHTTP.class.getResourceAsStream("/log4j.properties"));
      System.out.println("props = " + props.toString());
      PropertyConfigurator.configure(props);
} catch (IOException e) {
      e.printStackTrace();
}
 
// Turn off all logging on the root logger.
LogManager.getRootLogger().setLevel(Level.OFF);
 
// Obtain a named logger and log at the desired level.
Logger log = Logger.getLogger("myApp");
log.setLevel(Level.ALL);
log.info("initializing - trying to load configuration file ...");
 
// Reconfigure log4j from an external properties file on the file system.
Properties preferences = new Properties();
try {
    FileInputStream configFile = new FileInputStream("/path/to/app.properties");
    preferences.load(configFile);
    configFile.close();
    PropertyConfigurator.configure(preferences);
} catch (IOException ex)  {
    System.out.println("WARNING: Could not open configuration file");
    System.out.println("WARNING: Logging not configured (console output only)");
}
 
log.info("starting myApp");

Logback Framework
The Logback framework is a successor to the log4j framework, providing a native implementation of the SLF4J API. Logging configuration can be provided either in XML or in Groovy. It provides a SiftingAppender, which makes it possible to maintain separate log files based on the user session instance and to switch the log level for individual users. Logback automatically reloads upon configuration changes and provides better I/O failover in case of server failure.

Logback delegates the task of writing a logging event to components called appenders.
Appenders must implement the ch.qos.logback.core.Appender interface, whose doAppend() method is responsible for outputting the logging events in a suitable format to the appropriate output device.
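Because Logback implements the SLF4J API natively, application code depends only on the SLF4J interfaces and Logback picks up its configuration from the classpath. A minimal usage sketch (OrderService is an illustrative class name):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    // SLF4J API; Logback on the classpath provides the implementation and reads logback.xml.
    private static final Logger LOGGER = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Parameterized messages avoid string concatenation when the level is disabled.
        LOGGER.debug("Placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            // A trailing Throwable argument is logged with its stack trace.
            LOGGER.error("Failed to place order {}", orderId, e);
            throw e;
        }
    }
}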

Sample configuration for logback framework is as follows:
<configuration debug="false" scan="false" scanPeriod="60 seconds">
 
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
 
  <property name="EMAIL_HOST" scope="context" value="mail.dx.myserver.com" />
 
  <logger additivity="false" level="${JAVA_PROJECT_LOGGER_LEVEL}" name="com.myserver.u90">
     <appender-ref ref="DebugLogSiftAppender" />
     <appender-ref ref="ErrorLogSiftAppender" />
  </logger>
 
  <if condition='"Devl".equalsIgnoreCase(property("EnvironmentPrefix"))'>
    <then>
      <property name="DOZER_LOGGER_LEVEL" scope="context" value="off" />
      <property name="SPRING_LOGGER_LEVEL" scope="context" value="off" />
      <property name="HIBERNATEJPA_LOGGER_LEVEL" scope="context" value="off" />
      <property name="FROM_EMAIL" scope="context" value="sample_services_devl@myserver.com" />
      <property name="TO_EMAIL" scope="context" value="admin@company.com" />
      <property name="JAVA_PROJECT_LOGGER_LEVEL" scope="context" value="trace" />
      <property name="PERF_LOGGING_LEVEL" scope="context" value="debug" />
    </then>
  </if>
 
  <turboFilter class="ch.qos.logback.classic.turbo.MarkerFilter">
    <marker>PERFORMANCE</marker>
    <onMatch>ALLOW</onMatch>
  </turboFilter>
 
  <appender class="ch.qos.logback.core.ConsoleAppender" name="STDOUT">
    <encoder>
      <pattern>%date [%thread] %mdc %-5level %logger %msg %n %ex</pattern>
    </encoder>
  </appender>
 
  <appender class="ch.qos.logback.classic.sift.SiftingAppender" name="ErrorLogSiftAppender">
    <discriminator class="ch.qos.logback.classic.sift.JNDIBasedContextDiscriminator">
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender class="ch.qos.logback.core.rolling.RollingFileAppender" name="ErrorLogSiftAppender-${contextName}">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
          <level>ERROR</level>
          <onMatch>ACCEPT</onMatch>
          <onMismatch>DENY</onMismatch>
        </filter>
        <file>${logdir}/${contextName}Error.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${logdir}/${contextName}Error%d{yyyy-MM-dd}.%i.log</fileNamePattern>
          <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <!-- roll over whenever the file size reaches 10MB -->
            <maxFileSize>10MB</maxFileSize>
          </timeBasedFileNamingAndTriggeringPolicy>
          <!-- keep 30 days' worth of history -->
          <maxHistory>30</maxHistory>
        </rollingPolicy>
 
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
          <pattern>%date [%thread] %mdc %-5level %logger %msg %n</pattern>
        </encoder>
 
        <!--
          Alternative to the encoder above:
          <layout class="ch.qos.logback.classic.PatternLayout">
             <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg \(%file:%line\)%n</pattern>
          </layout>
 
          <file>server.log</file>
        -->
 
      </appender>
 
    </sift>
  </appender>
 
  <if condition='"Prod".equalsIgnoreCase(property("EnvironmentPrefix"))'>
    <then>
      <logger level="${JAVA_PROJECT_LOGGER_LEVEL}" name="com.mycompany.application">
        <appender-ref ref="EmailSiftAppender" />
        <appender-ref ref="ErrorLogSiftAppender" />
        <appender-ref ref="DebugLogSiftAppender" />
      </logger>
    </then>
  </if>
 
</configuration>


Wednesday, February 27, 2013

Maven Plugin Development


Maven carries out all of its work using plugins, which makes them highly significant for its operation. But there are times when a customized plugin implementation is needed in order to carry out some peculiar build-related tasks, especially tasks involving Jenkins build operations or command-line operations that are better handled with Maven than with Ant scripts. Further, plugins can call other plugins and create custom goals to carry out a large series of operations. Hence Maven plugin development comes in handy for creating customized Maven plugins.

A Maven plugin contains a series of Mojos (goals), each Mojo being a single Java class containing a series of annotations which tell Maven how to generate the plugin descriptor. Every Maven plugin mojo must implement the Mojo interface, which requires the class to implement the getLog(), setLog() and execute() methods. The abstract class AbstractMojo provides a default implementation of getLog() and setLog(), so only the execute() method needs to be implemented. The getLog() method gives access to the Maven logger, which has info(), debug() and error() methods to log at various levels. The execute() method is the entry point of the plugin execution and provides the customized build-process implementation for the Maven plugin.
           The AbstractMojo implementation does require a @goal annotation in the class-level javadoc. The goal name specified with the javadoc @goal annotation defines the Maven goal name to be used along with the goal prefix in order to execute the plugin. The mojo goal can be used directly on the command line or from the POM by specifying mojo-specific configuration. The @phase annotation, if specified, binds the mojo to a particular phase of the standard build lifecycle, e.g. install; note that specifying @phase does not cause the Maven lifecycle phases to be executed in series up to that phase. The @execute annotation can be used to specify either a phase and lifecycle, or a goal, to be invoked before the execution of the plugin implementation. When the mojo goal is invoked, it will first invoke a parallel lifecycle, ending at the given phase. If a goal is provided instead of a phase, that goal will be executed in isolation. The execution of either will not affect the current project, but will instead make the ${executedProject} expression available if required. The @requiresProject annotation denotes whether the plugin executes inside a project, thus requiring a POM, or whether it can be executed without one; by default @requiresProject is set to true, requiring the plugin to run inside a project. The @requiresOnline annotation mandates that the plugin be executed in online mode. The Maven Mojo API specification describes all the available annotations in detail.
      A Maven mojo class can also access Maven-specific objects such as MavenSession, MavenProject and Maven using the Maven parameter expressions "${session}", "${project}" or "${maven}". These Maven model objects can be used to get the project details from the POM or to alter the session to execute another project. Below is a sample Maven plugin mojo which reads another POM, creates a new Maven project and alters the session to execute the new project. It also lists the plugins present in the Maven project.

/**
 * @goal sample-task
 * @requiresProject false
 * @execute lifecycle="mvnsamplecycle" phase="generate-sources"
 */
public class SampleMojo extends AbstractMojo {

 /**
  * The Maven Session Object
  * @parameter expression="${session}"
  * @required
  * @readonly
  */
 private MavenSession session;
 
 /**
  * The maven project.
  * @parameter expression="${project}"
  * @readonly
  */
 private MavenProject project;
 
 /**
  * Relative path of the module to execute (illustrative plugin parameter).
  * @parameter expression="${app}"
  */
 private String app;
 
 /**
  * Key (groupId:artifactId) of the plugin to look up (illustrative plugin parameter).
  * @parameter expression="${pluginKey}"
  */
 private String key;
 
 public void execute() throws MojoExecutionException, MojoFailureException {
 
   try {
       // Create a new MavenProject instance from the pom.xml and set it as the current project.
       MavenXpp3Reader mavenreader = new MavenXpp3Reader();
       File file = new File("../../pom.xml");
       FileReader reader = new FileReader(file);
       Model model = mavenreader.read(reader);
       model.setPomFile(file);
 
       MavenProject newProject = new MavenProject(model);
       project.setBuild(newProject.getBuild());
       project.setExecutionProject(newProject);
       project.setFile(file);
       session.setCurrentProject(newProject);
       session.setUsingPOMsFromFilesystem(true);
 
       // Create a new MavenSession instance and set it to execute the new maven project.
       ReactorManager reactorManager = new ReactorManager(session.getSortedProjects());
       MavenSession newsession = new MavenSession(session.getContainer(), session.getSettings(),
               session.getLocalRepository(), session.getEventDispatcher(), reactorManager, session.getGoals(),
               session.getExecutionRootDirectory() + "/" + app, session.getExecutionProperties(),
               session.getUserProperties(), new Date());
 
       newsession.setUsingPOMsFromFilesystem(true);
       session = newsession;
       project.setParent(newProject);
       project.addProjectReference(newProject);
       project.setBasedir(new File(app));
 
       // List all the plugins in the project pom and log the one matching the configured key.
       List<Plugin> plugins = project.getBuildPlugins();
       for (Plugin plugin : plugins) {
           if (key.equalsIgnoreCase(plugin.getKey())) {
               getLog().info("plugin = " + plugin);
           }
       }
   } catch (Exception e) {
       // Reading the external POM or building the reactor can fail; wrap any checked exception.
       throw new MojoExecutionException("Failed to load and execute ../../pom.xml", e);
   }
 }
}

Below are the required dependencies for the Maven plugin. Note that the last three dependencies, including maven-invoker, are optional and are used to access the Maven object model (MavenSession, MavenProject, etc.).
<dependencies>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-plugin-api</artifactId>
      <version>2.0</version>
    </dependency>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.1</version>
    </dependency>
 
    <!-- Dependencies for Maven Object Model -->
    <dependency>
      <groupId>org.apache.maven.shared</groupId>
      <artifactId>maven-invoker</artifactId>
      <version>2.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-component-annotations</artifactId>
      <version>1.5.5</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-utils</artifactId>
      <version>3.0.8</version>
    </dependency>
 </dependencies>
 
 <build>
   <plugins>
     ...................................
     <plugin>
       <artifactId>maven-plugin-plugin</artifactId>
       <version>2.3</version>
       <configuration>
           <goalPrefix>samples</goalPrefix>
       </configuration>
     </plugin>
     ...................................
   </plugins>
</build>
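With the goal prefix configured as "samples" and the mojo's @goal set to "sample-task", the plugin can then be invoked directly from the command line, for example (assuming the plugin's groupId is registered under pluginGroups in settings.xml, or the fully qualified groupId:artifactId:version:goal form is used):

mvn samples:sample-task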

Maven Lifecycle

The process of building and distributing a particular artifact (project) is defined as the Maven build lifecycle. There are three built-in build lifecycles: default, clean and site. The default lifecycle handles the project deployment, the clean lifecycle handles project cleaning, while the site lifecycle handles the creation of the project's site documentation. Each build lifecycle is defined by a different list of build phases, wherein a build phase represents a stage in the lifecycle. The build phases listed in the lifecycle are executed sequentially to complete the build lifecycle. On executing a specified build phase from the command line, Maven executes not only that build phase but also every build phase prior to it in the lifecycle. This works in a multi-module scenario too. A build phase carries out its operations through the goals bound to it.

A goal represents a specific task (finer than a build phase) which contributes to the building and managing of a project. It may be bound to zero or more build phases. A goal not bound to any build phase could be executed outside of the build lifecycle by direct invocation. The order of execution depends on the order in which the goal(s) and the build phase(s) are invoked. Moreover, if a goal is bound to one or more build phases, that goal will be called in all those phases. Furthermore, a build phase can also have zero or more goals bound to it. If a build phase has no goals bound to it, that build phase will not execute. But if it has one or more goals bound to it, it will execute all those goals mostly in the same order of declaration as in the POM.
Goals can be bound to a particular lifecycle phase by configuring a plugin in the project. The goals that are configured will be added to the goals already bound to the lifecycle from the selected phase. If more than one goal is bound to a particular phase, the order used is that those from the selected phase are executed first, followed by those configured in the POM. Note that the <executions> element can be used to gain more control over the order of particular goals. It can also run the same goal multiple times with different configuration if required. Separate executions can also be given an ID so that during inheritance or the application of profiles, it can be controlled whether the goal configuration is merged or turned into an additional execution. When multiple executions are given that match a particular phase, they are executed in the order specified in the POM, with inherited executions running first.
<lifecycle>
  <phase>
    <id>process-classes</id>
    <goals>
      <goal>
        <id>jcoverage:instrument</id>
      </goal>
    </goals>
  </phase>
  <!-- ... -->
  <phase>
    <id>test</id>
    <goals>
      <goal>
        <id>surefire:test</id>
        <configuration>
          <!-- This assumes this is used instead of adding a runtime classpath element, which might be a good idea -->
          <classesDirectory>${project.build.directory}/generated-classes/jcoverage</classesDirectory>
          <ignoreFailures>true</ignoreFailures>
        </configuration>
      </goal>
    </goals>
  </phase>
</lifecycle>
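For comparison, the more common way to bind a goal to a phase is through the <executions> element in the project POM rather than a custom lifecycle definition. A minimal sketch using the maven-antrun-plugin (the execution id, phase and echo message are illustrative):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.7</version>
      <executions>
        <!-- The id allows the execution to be merged or overridden during inheritance and profiles. -->
        <execution>
          <id>echo-on-verify</id>
          <phase>verify</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <echo message="running during the verify phase" />
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>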

Report Plugin

Writing a report plugin is similar to writing a mojo plugin, except that we extend the AbstractMavenReport class instead of the AbstractMojo class. The report plugin can be added to the plugins of the reporting section to generate the report with the Maven site. The goal to be executed is specified in the report tag of the reportSet section, which controls the execution of the goals. The methods getProject(), getOutputDirectory(), getSiteRenderer(), getDescription(), getName(), getOutputName(), getBundle() and executeReport() need to be overridden.

Note: In order to create the report without using Doxia, e.g. via XSL transformation from some XML file, add the following method to the report Mojo:
public boolean isExternalReport() {
    return true;
}

The following dependencies are required for a Maven report plugin:
<dependency>
    <groupId>org.apache.maven.reporting</groupId>
    <artifactId>maven-reporting-api</artifactId>
    <version>2.0.8</version>
</dependency>
  
<dependency>
    <groupId>org.apache.maven.reporting</groupId>
    <artifactId>maven-reporting-impl</artifactId>
    <version>2.0.4.3</version>
</dependency>
  
<dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>2.0.1</version>
</dependency>

AbstractMavenReportRenderer is used to handle the basic operations with the Doxia Sink to set up the head, title and body of the HTML report. The renderBody method is implemented to fill in the middle of the report using the Doxia utilities for sections and tables. To use the Doxia Sink API we import the org.apache.maven.doxia.sink.Sink class and call the getSink() method to get its instance. Then we use the Doxia API, as in the example below, to generate the header, title and body. A starting tag is denoted by xxx() while the corresponding end tag is denoted by xxx_(), similar to HTML tags. The rawText() method outputs exactly the specified text, while the text() method adds escaping characters. The sectioning is strict, meaning that section level 2 must be nested in section level 1, and so forth. The sample report mojo below overrides the required methods and provides a sample usage of the Doxia API.

public class ReportMojo extends AbstractMavenReport {
 
 /**
 * Report output directory.
 * @parameter expression="${project.reporting.outputDirectory}"
 * @required
 * @readonly
 */
 private String outputDirectory;
 
 /**
 * Maven Project Object.
 * @parameter default-value="${project}"
 * @required
 * @readonly
 */
 private MavenProject project;
  
 /**
 * Maven Report Renderer.
 * @component
 * @required
 * @readonly
 */
 private Renderer siteRenderer;
 
 protected MavenProject getProject() {
  return project;
 }
 
 protected String getOutputDirectory() {
  return outputDirectory;
 }
 
 protected Renderer getSiteRenderer() {
  return siteRenderer;
 }
 
 public String getDescription(Locale locale) {
  return getBundle(locale).getString("report.description");
 }
 
 public String getName(Locale locale) {
  return getBundle(locale).getString("report.title");
 }
 
 public String getOutputName() {
  return "sample-report";
 }
 
 private ResourceBundle getBundle(Locale locale) {
  return ResourceBundle.getBundle("sample-report", locale, this.getClass().getClassLoader());
 }
 
 @Override
 protected void executeReport(Locale locale) throws MavenReportException {
 
     Sink sink = getSink();
     sink.head();
     sink.title();
     sink.text( getBundle(locale).getString("report.title") );
     sink.title_();
     sink.head_();
    
     sink.body();
     sink.section1();
     sink.sectionTitle1();
     sink.text( String.format(getBundle(locale).getString("report.header"), project.getVersion()) );
     sink.sectionTitle1_();
     sink.section1_();
       
     sink.lineBreak();
 
     sink.table();
     sink.tableRow();
     sink.tableHeaderCell( );
     sink.bold();
     sink.text( "Id" );
     sink.bold_();
     sink.tableHeaderCell_();
     sink.tableRow_();
 
     sink.tableRow();
     sink.tableCell();
     sink.link( "http://some_url" );
     sink.text( "123" );
     sink.link_();
     sink.tableCell_();
     sink.tableRow_();
     sink.table_();
       
     sink.body_();
     sink.flush();
     sink.close();
 }
}

MultiPage Report Plugin

Often there is a need to create Maven reports with multiple pages, but the Maven report plugin provides only a single Doxia sink instance for creating an HTML page. If we copy the implementation of the execute() method from the AbstractMavenReport class and loop over it with different file names, we do get the required multiple pages, but this only works when the report plugin is executed directly, without the Maven site. The Maven site plugin does not call the execute() method; it calls the actual implementation of the executeReport(Locale) method. Hence such logic does not work for mvn site, but works for direct execution of the plugin. The ReportDocumentRenderer from the maven-site-plugin creates the SiteRendererSink and calls report.generate(sink, locale), which in turn calls the executeReport(Locale) method. Using the createSink() method fails in this case, and there is no way to create more SiteRendererSinks within the report because those sinks come from a different classloader. Maven does provide the AbstractMavenMultiPageReport class to implement, but it also does not provide any way to create multiple sink instances. After upgrading to maven-reporting-api 3.0, the AbstractMavenReport class has a new method called getSinkFactory(). It allows new sink instances to be created when the executeReport method is called from the site plugin, which initializes the factory instance. In the case of direct execution of the multi-page report plugin, the execute() method of the AbstractMavenReport class neither initializes the factory nor provides any setter to set it. Hence in that case we resort to a workaround and copy the execute() method implementation into the executeReport method of the multi-page report class to create a new sink instance. To access the getSinkFactory() method we upgrade the maven-reporting-api to 3.0 as follows:
<dependency>
    <groupId>org.apache.maven.reporting</groupId>
    <artifactId>maven-reporting-api</artifactId>
    <version>3.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.maven.doxia</groupId>
            <artifactId>doxia-sink-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
 
<dependency>
    <groupId>org.apache.maven.doxia</groupId>
    <artifactId>doxia-sink-api</artifactId>
    <version>1.3</version>
</dependency>
 
<dependency>
    <groupId>org.apache.maven.reporting</groupId>
    <artifactId>maven-reporting-impl</artifactId>
    <version>2.2</version>
</dependency>

The following code provides an overview of the implementation, with an example of generating a multi-page report:
public class MultiPageReportMojo extends AbstractMavenReport {
 
  .......................
 
  /**
   * Copied implementation from {@link AbstractMavenReport}. Generates the index page and
   * report pages for all the environments. If the {@link SinkFactory} is null
   * (when invoked directly) then creates a new {@link SiteRendererSink} object using
   * {@link RenderingContext}. If the {@link SinkFactory} is not null (usually for mvn site)
   * then uses its createSink() method to create a new {@link Sink} object.
   * @see org.apache.maven.reporting.AbstractMavenReport#execute()
   */
   @Override
   protected void executeReport(Locale locale) throws MavenReportException {
 
    List<String> envList = Arrays.asList("local", "devl", "qual", "cert", "prod");
   
    // index method uses getSink() method from AbstractMavenReport class to directly access
    // the sink and render the index page.
    executeReportIndex(locale, envList);
   
    for (String env : envList) {
    
      File outputDirectory = new File( getOutputDirectory() );
      Writer writer = null;
    
      try {
     
         String filename = outputPrefix + env + ".html";
         SinkFactory factory = getSinkFactory();
 
         if(factory == null) {
      
           SiteRenderingContext siteContext = new SiteRenderingContext();
           siteContext.setDecoration( new DecorationModel() );
           siteContext.setTemplateName( "org/apache/maven/doxia/siterenderer/resources/default-site.vm" );
           siteContext.setLocale( locale );
                
           RenderingContext context = new RenderingContext( outputDirectory, filename );
 
           SiteRendererSink renderSink = new SiteRendererSink( context );
 
           // This method uses the sink instance passed for the environment to render the report page.
           executeConfigReport(locale, renderSink);
 
           renderSink.close();
 
           if ( !isExternalReport() ) { // MSHARED-204: only render Doxia sink if not an external report
                 
             outputDirectory.mkdirs();
             writer = new OutputStreamWriter( new FileOutputStream( new File( outputDirectory, filename ) ), "UTF-8" );
             getSiteRenderer().generateDocument( writer, renderSink, siteContext );
           }
         }
         else {
           Sink renderSink = factory.createSink(outputDirectory, filename);
 
           // This method uses the sink instance passed for the environment to render the report page.
           executeConfigReport(locale, renderSink);
 
           renderSink.close();
         }
      } catch (Exception e) {
         getLog().error("Report, Failed to create server-config-env: " + e.getMessage(), e);
         throw new MavenReportException(getName( Locale.ENGLISH ) + "Report, Failed to create server-config-env: "
                                                                  + e.getMessage(), e);
      } finally {
         IOUtil.close( writer );
      }
    }
   }
 
  .......................
 
  /**
   * Renders the table header cell with the specified width and text using the specified sink instance.
   * @param sink
   *   {@link Sink} instance to render the table header cell.
   * @param width
   *   width of the table header cell.
   * @param text
   *   text to render in the table header cell.
   */
  protected void sinkHeaderCellText(Sink sink, String width, String text) {
 
        SinkEventAttributes attrs = new SinkEventAttributeSet();
        attrs.addAttribute(SinkEventAttributes.WIDTH, width);
        sink.tableHeaderCell(attrs);
        sink.text(text);
        sink.tableHeaderCell_();
  }
}