Thursday 27 November 2014

Certificates with SHA-1 and SunCertPathBuilderException

As SHA-1 heads towards deprecation as a hashing algorithm for certificate signatures, unpleasant side effects start to appear.

Partners we need to communicate with over HTTPS have a brand new certificate signed by the GoDaddy Certificate Authority. Accessing their HTTPS-secured site via a browser does not show anything alarming.

But accessing a REST endpoint hosted on the same site using Java's HttpUrlConnection blows up with javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

What is going on?

Every internet browser comes with quite a big set of preinstalled Certificate Authority (CA) certificates, trusted because your browser's vendor trusts them. And because they are CAs and they are trusted, every certificate signed by them is trusted too.

Same story with the JVM. There is a truststore file inside every JVM named cacerts. In Oracle jdk1.7.0_67 there are 87 Certificate Authorities in it, trusted by Oracle. GoDaddy is there too, so why that SunCertPathBuilderException? Let's examine it more closely.
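The count of trust anchors can also be checked programmatically by asking JSSE for its default trust managers. A small sketch (the class and method names are made up for illustration):

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class CacertsCount {

    // Returns the number of CA certificates the default JVM truststore trusts.
    public static int countDefaultTrustAnchors() throws Exception {
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null keystore -> JVM default cacerts
        X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];
        return tm.getAcceptedIssuers().length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Trusted CAs in default truststore: " + countDefaultTrustAnchors());
    }
}
```

The exact number printed depends on the JVM vendor and version.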

Every JVM also ships with a command line tool named keytool. Using it you can list and also modify any keystore in jks format, such as cacerts. Executing... (the default password is changeit)

 
keytool -list -v -keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit | grep -A 14 godaddy
...will print the following...
Alias name: godaddyclass2ca
Creation date: 20-Jan-2005
Entry type: trustedCertEntry

Owner: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Issuer: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Serial number: 0
Valid from: Tue Jun 29 18:06:20 BST 2004 until: Thu Jun 29 18:06:20 BST 2034
Certificate fingerprints:
  MD5:  91:DE:06:25:AB:DA:FD:32:17:0C:BB:25:17:2A:84:67
  SHA1: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
  SHA256: C3:84:6B:F2:4B:9E:93:CA:64:27:4C:0E:C6:7C:1E:CC:5E:02:4F:FC:AC:D2:D7:40:19:35:0E:81:FE:54:6A:E4
  Signature algorithm name: SHA1withRSA
  Version: 3
Now compare this with the certificate served by the HTTPS website... the GoDaddy G2 certificate

There is an obvious mismatch. Apart from the different certificate name and validity date range, notice that the Signature Algorithm is "SHA-256 with RSA". GoDaddy's certificate in the JVM is different from the one in use on the website, hence the SunCertPathBuilderException.

To fix this, we need to add the right (G2) GoDaddy certificate into the JVM cacerts keystore. Visiting GoDaddy's certificate repository, the obvious candidate "GoDaddy Class 2 Certification Authority Root Certificate - G2" can be found there.

wget https://certs.godaddy.com/repository/gdroot-g2.crt
keytool -printcert -file gdroot-g2.crt
This will give us something matching what we saw in the website certificate...
Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Serial number: 0
Valid from: Mon Aug 31 20:00:00 EDT 2009 until: Thu Dec 31 18:59:59 EST 2037
Certificate fingerprints:
  MD5:  80:3A:BC:22:C1:E6:FB:8D:9B:3B:27:4A:32:1B:9A:01
  SHA1: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
  SHA256: 45:14:0B:32:47:EB:9C:C8:C5:B4:F0:D7:B5:30:91:F7:32:92:08:9E:6E:5A:63:E2:74:9D:D3:AC:A9:19:8E:DA
  Signature algorithm name: SHA256withRSA
  Version: 3

Now just import gdroot-g2.crt into the JVM cacerts truststore

sudo keytool -import -alias godaddyg2ca -file gdroot-g2.crt -keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit

Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Serial number: 0
Valid from: Tue Sep 01 01:00:00 BST 2009 until: Thu Dec 31 23:59:59 GMT 2037
Certificate fingerprints:
  MD5:  80:3A:BC:22:C1:E6:FB:8D:9B:3B:27:4A:32:1B:9A:01
  SHA1: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
  SHA256: 45:14:0B:32:47:EB:9C:C8:C5:B4:F0:D7:B5:30:91:F7:32:92:08:9E:6E:5A:63:E2:74:9D:D3:AC:A9:19:8E:DA
  Signature algorithm name: SHA256withRSA
  Version: 3

Extensions: 

#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#2: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  Key_CertSign
  Crl_Sign
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 3A 9A 85 07 10 67 28 B6   EF F6 BD 05 41 6E 20 C1  :....g(.....An .
0010: 94 DA 0F DE                                        ....
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Problem solved; the REST call should succeed from now on using this JVM.

For completeness' sake, if you want to get rid of it again, execute

keytool -delete -alias godaddyg2ca -keystore ${JAVA_HOME}/jre/lib/security/cacerts

What if you are not allowed to modify the JVM cacerts truststore?

Then make a copy of it, import gdroot-g2.crt into the copy and use this custom truststore instead of the default JVM truststore via the -Djavax.net.ssl.trustStore=/path/to/custom_cacerts -Djavax.net.ssl.trustStorePassword=changeit java parameters
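The same pair of parameters can also be set from code, as long as it happens before the first HTTPS connection triggers JSSE initialization. A minimal sketch reusing the truststore path from above (the class name is made up):

```java
// Point the JVM at a custom truststore instead of the default cacerts.
// Must run before the first HTTPS connection initializes JSSE.
public class CustomTruststoreProperties {

    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.trustStore", "/path/to/custom_cacerts");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }
}
```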

What if you need multiple truststores or something similarly complex?

Such scenarios cannot be solved simply by JVM switches and parameters anymore and you have to roll your own X509TrustManager implementation. Then you need to plug it into your HTTP client connection setup - HttpsUrlConnection (SSLSocketFactory), Apache HttpClient 3 (SecureProtocolSocketFactory), Apache HttpClient 4 (SSLConnectionSocketFactory), Jersey (SslConfigurator)
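Before rolling a fully custom X509TrustManager, note that a per-connection truststore can often be wired up with the standard TrustManagerFactory. A minimal sketch (class and method names are illustrative; passing a null path falls back to the default cacerts):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class CustomTruststoreConnection {

    // Builds an SSLContext backed by the given jks truststore file.
    // A null path uses the JVM default trust anchors instead.
    static SSLContext sslContextFor(String truststorePath, char[] password) throws Exception {
        KeyStore trustStore = null;
        if (truststorePath != null) {
            trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            try (InputStream in = new FileInputStream(truststorePath)) {
                trustStore.load(in, password);
            }
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore); // null keystore -> JVM default cacerts
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext;
    }

    public static void main(String[] args) throws Exception {
        SSLContext sslContext = sslContextFor(null, null); // default trust for the demo
        HttpsURLConnection connection = (HttpsURLConnection) new URL("https://example.com").openConnection();
        connection.setSSLSocketFactory(sslContext.getSocketFactory());
        // connection.connect(); // the handshake would now use these trust anchors
    }
}
```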

Monday 3 November 2014

Airbrake for logback

If you've been observing this ticket for a while, it seems to be pretty much ignored. Well, not anymore, or at least not by me. The sources of the Airbrake Logback Appender are in the GitHub airbrake-logback repo

Grab it from Maven central repo

<dependency>
    <groupId>net.anthavio</groupId>
    <artifactId>airbrake-logback</artifactId>
    <version>1.0.0</version>
</dependency>

Use it...well...as usual

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="true" scanPeriod="30 seconds">

    <appender name="AIRBRAKE" class="net.anthavio.airbrake.AirbrakeLogbackAppender">
        <apiKey>YOUR_AIRBRAKE_API_KEY</apiKey>
        <env>test</env>
        <enabled>true</enabled>

        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <root>
        <level value="info" />
        <appender-ref ref="AIRBRAKE" />
    </root>
</configuration>

Happy Logback-based Airbraking!

Friday 31 October 2014

Disqus java rest api client library released

After almost a year of sleeping as 1.0.0-SNAPSHOT, I finally released the Disquo project, a Java client library for the Disqus REST API

It is deployed into Maven central repository with following coordinates:

    <dependency>
        <groupId>net.anthavio.disquo</groupId>
        <artifactId>disquo-api</artifactId>
        <version>1.0.0</version>
    </dependency>

It covers all Disqus v3.0 API endpoints and authentication modes: OAuth 2 (access_token), Single sign-on (remote_auth) and Anonymous ("Guest Commenting" must be enabled for the site/forum)

To get the Application keys needed for the Disqus API, visit Disqus and Log in or Create an Account, then Register a new application and grab the generated "Public Key" and "Secret Key" and optionally "Access Token"

DisqusApplicationKeys keys = new DisqusApplicationKeys("...api_key...", "...secret_key...");
//Construct Disqus API client
DisqusApi disqus = new DisqusApi(keys);
DisqusResponse<List<DisqusPost>> response = disqus.posts().list(threadId, null);
List<DisqusPost> posts = response.getResponse();
for (DisqusPost post : posts) {
  String text = post.getAuthor().getName() + " posted " + post.getMessage();
  System.out.println(text);
}
disqus.close();

More examples can be found on the GitHub project page. If you find a bug, please report it on the GitHub issues page

Happy Disqusing!

Thursday 30 October 2014

Vaadin & Spring Boot & WebSockets & OpenShift & Java8

Spring Boot has been around for a while and because Vaadin caught my interest lately, plus Spring4Vaadin appeared, I gave it a spin. To make it more challenging, I decided to use Java 8 and deploy on OpenShift. All the code is hosted on GitHub

The application is a simple chat with OAuth2 sign-in via Facebook/Google/GitHub/LinkedIn/Disqus. The high-tech part is the way it broadcasts chat messages to chat participants using server push, which in turn uses the WebSocket mechanism.

Some information about Spring Boot on OpenShift is part of the Spring Boot documentation, and some more is in this blog post.

I've created a DIY gear using the OpenShift application console, but the same can be done using rhc app create vinbudin -t diy-0.1. The application code was already on GitHub, and to get it running on OpenShift, the following steps must be performed.

1. Add git upstream repository (openshift remote)

git clone git@github.com:anthavio/vinbudin.git
git remote add openshift ssh://your_uuid@vinbudin-yourdomain.rhcloud.com/~/git/vinbudin.git/

2. Add OpenShift build hooks

Build hooks in .openshift/action_hooks are shell scripts executed when you push something into the OpenShift remote repository. Typically they stop the application, run maven/ant/sbt to build it and start it again.

3. Push to the openshift

That's it. When you execute the following command, you will see the build hooks executed and the project will be built and deployed on your OpenShift gear.

git push openshift master

You can still git push origin master to the GitHub repository without invoking a build on OpenShift

Maven 3.2.3 and Java 8 on OpenShift

OpenShift gears (as of 28 Oct 2014) have only Java 1.7.0_71 and Maven 3.0.5. If you want to use different, most probably newer, versions, just download and unpack them in .openshift/action_hooks/deploy into ${OPENSHIFT_DATA_DIR}

WebSockets on OpenShift

The situation seems to be the same as two years ago when this blog post was written. You still have to access port 8000 to get WebSockets working.

Thanks to the Vaadin/Atmosphere push implementation, which automatically downgrades to long-polling when the WebSocket mechanism is unavailable, user experience is not affected, but by observing the messages exchanged between browser and server you will easily spot the problem.

Open the Chrome Developer Tools Network tab and navigate to http://vinbudin-openshift.anthavio.net/ui

Request URL:ws://vinbudin-openshift.anthavio.net/ui/PUSH/?v-uiId=0&v-csrfToken=731062c6-712d-4320-a92c-3742fe3b4451&X-Atmosphere-tracking-id=0&X-Atmosphere-Framework=2.1.5.vaadin4-jquery&X-Atmosphere-Transport=websocket&X-Atmosphere-TrackMessageSize=true&X-Cache-Date=0&Content-Type=application/json;%20charset=UTF-8&X-atmo-protocol=true
Request Method:GET
Status Code:501 Not Implemented

The WebSocket upgrade request was rejected. Ouch!

Now do the same, but with port 8000 in the URL - http://vinbudin-openshift.anthavio.net:8000/ui

Request URL:ws://vinbudin-openshift.anthavio.net:8000/ui/PUSH/?v-uiId=0&v-csrfToken=731062c6-712d-4320-a92c-3742fe3b4451&X-Atmosphere-tracking-id=0&X-Atmosphere-Framework=2.1.5.vaadin4-jquery&X-Atmosphere-Transport=websocket&X-Atmosphere-TrackMessageSize=true&X-Cache-Date=0&Content-Type=application/json;%20charset=UTF-8&X-atmo-protocol=true
Request Method:GET
Status Code:101 Switching Protocols

The WebSocket upgrade request was accepted and the protocol was switched. Hurray!

Happy Vaadin & Spring Boot & OpenShift websocketing!

Tuesday 16 September 2014

Spring OAuth2RestTemplate and self-signed server certificate

It might happen that you end up using spring-security-oauth2 on the OAuth2 client side. Personally I would not recommend it, as it brings much more complexity into a task that is not that difficult. But every use-case is different.

If you also happen to integrate with a site using a self-signed certificate, you'll inevitably encounter the following exception.

org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException: Error requesting access token.
 at org.springframework.security.oauth2.client.token.OAuth2AccessTokenSupport.retrieveToken(OAuth2AccessTokenSupport.java:144) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.security.oauth2.client.token.grant.code.AuthorizationCodeAccessTokenProvider.obtainAccessToken(AuthorizationCodeAccessTokenProvider.java:198) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.security.oauth2.client.OAuth2RestTemplate.acquireAccessToken(OAuth2RestTemplate.java:221) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.security.oauth2.client.OAuth2RestTemplate.getAccessToken(OAuth2RestTemplate.java:173) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.security.oauth2.client.OAuth2RestTemplate.createRequest(OAuth2RestTemplate.java:105) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:538) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.security.oauth2.client.OAuth2RestTemplate.doExecute(OAuth2RestTemplate.java:128) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:518) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.web.client.RestTemplate.getForObject(RestTemplate.java:256) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
...
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://somewhere.something.info/oauth2/token":sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:558) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:511) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.security.oauth2.client.token.OAuth2AccessTokenSupport.retrieveToken(OAuth2AccessTokenSupport.java:136) ~[spring-security-oauth2-2.0.2.RELEASE.jar:na]
 ... 86 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[na:1.7.0_55]
 at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1884) ~[na:1.7.0_55]
 at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:276) ~[na:1.7.0_55]
 at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:270) ~[na:1.7.0_55]
 at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1341) ~[na:1.7.0_55]
 at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:153) ~[na:1.7.0_55]
 at sun.security.ssl.Handshaker.processLoop(Handshaker.java:868) ~[na:1.7.0_55]
 at sun.security.ssl.Handshaker.process_record(Handshaker.java:804) ~[na:1.7.0_55]
 at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1016) ~[na:1.7.0_55]
 at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312) ~[na:1.7.0_55]
 at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1339) ~[na:1.7.0_55]
 at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1323) ~[na:1.7.0_55]
 at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563) ~[na:1.7.0_55]
 at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) ~[na:1.7.0_55]
 at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153) ~[na:1.7.0_55]
 at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:78) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:52) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:542) ~[spring-web-4.0.5.RELEASE.jar:4.0.5.RELEASE]
 ... 88 common frames omitted
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:385) ~[na:1.7.0_55]
 at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292) ~[na:1.7.0_55]
 at sun.security.validator.Validator.validate(Validator.java:260) ~[na:1.7.0_55]
 at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:326) ~[na:1.7.0_55]
 at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:231) ~[na:1.7.0_55]
 at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:126) ~[na:1.7.0_55]
 at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1323) ~[na:1.7.0_55]
 ... 102 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:196) ~[na:1.7.0_55]
 at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:268) ~[na:1.7.0_55]
 at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:380) ~[na:1.7.0_55]
 ... 108 common frames omitted

The exception is thrown when https://somewhere.something.info/oauth2/token is used to trade the OAuth2 code for an access token. You probably know what to do. Time for the stupid trick with a (totally insecure) X509TrustManager! As OAuth2RestTemplate extends RestTemplate, it inherits the public void setRequestFactory(ClientHttpRequestFactory requestFactory) method, which can be used to pull off the trick.

private static void disableCertificateChecks(OAuth2RestTemplate oauthTemplate) throws Exception {

    SSLContext sslContext = SSLContext.getInstance("TLS");
    sslContext.init(null, new TrustManager[] { new DumbX509TrustManager() }, null);
    ClientHttpRequestFactory requestFactory = new SSLContextRequestFactory(sslContext);

    //This is for OAuth protected resources
    oauthTemplate.setRequestFactory(requestFactory);
}

Code for SSLContextRequestFactory and DumbX509TrustManager is gisted here
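The gist link aside, a trust-all X509TrustManager is typically just a handful of empty methods. A sketch of what such a class might look like (not necessarily identical to the gisted code, and again: never use this in production):

```java
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// WARNING: accepts any certificate chain without validation
public class DumbX509TrustManager implements X509TrustManager {

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // accept everything
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) {
        // accept everything
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0]; // no trusted issuers advertised
    }
}
```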

Now, if you run the test once again, you will still get the same SunCertPathBuilderException. Why?

The answer is hidden in the bowels of OAuth2AccessTokenSupport, which is the base class of AuthorizationCodeAccessTokenProvider. To cut a long story short, it creates its own RestTemplates for token endpoint operations.

Luckily again, setRequestFactory(...) is also provided on OAuth2AccessTokenSupport. Repeating the same trick finally gives us a working, exception-free solution:

private static void disableCertificateChecks(OAuth2RestTemplate oauthTemplate) throws Exception {

    SSLContext sslContext = SSLContext.getInstance("TLS");
    sslContext.init(null, new TrustManager[] { new DumbX509TrustManager() }, null);
    ClientHttpRequestFactory requestFactory = new SSLContextRequestFactory(sslContext);

    //This is for OAuth protected resources
    oauthTemplate.setRequestFactory(requestFactory);

    //AuthorizationCodeAccessTokenProvider creates its own RestTemplate for token operations
    AuthorizationCodeAccessTokenProvider provider = new AuthorizationCodeAccessTokenProvider();
    provider.setRequestFactory(requestFactory);
    oauthTemplate.setAccessTokenProvider(provider);
}

Remember, you've just created a huge security hole in your application. Make doubly sure it is never used in production.

Happy insecure HTTPS REST OAuth2 calls!

PS: if you favour a more advanced HTTP transport layer than basic Java HttpURLConnection (and on the server side you should), like HttpComponents 4.3, then the simplest possible ClientHttpRequestFactory creation would be:

SSLContext sslContext = new SSLContextBuilder().loadTrustMaterial(null, new TrustSelfSignedStrategy()).build();
CloseableHttpClient httpClient = HttpClients.custom().setSslcontext(sslContext).build();
ClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory(httpClient);

Wednesday 4 June 2014

Netflix OSS github repo is full of goodies

If you haven't browsed through the Netflix OSS GitHub repositories yet, you should do it immediately. It would be very surprising if you could not find something useful there.

I've found three projects (Feign, Hystrix, Archaius) that directly address the same problems I've been building a similar library to solve. Many others are inspirational or at least interesting, like RxJava, quite popular in "reactive circles".

Some of them may have (better?) alternatives like Retrofit, some others are used as bricks in interesting projects like Halfpipe.

Apparently everybody is going crazy about Spring Boot. Let's hope it will not share destiny with Spring Roo.

Tuesday 29 April 2014

Fluent Builder method ordering

The classic simple fluent builder usually suffers from some annoyances.

Let's look at this example:
ComplexClass cc = ComplexClass.Builder()
  .addThis(42).setThat("I'm that").addSomethingOther("I'm other")
  .addYetAnother("yet yet yet").mixPudding(true).setChickenFeedingDevice(device)
  .addThis(99).withTimeout(5000).setThat("I'm another that").build();
Disregarding the silly and inconsistent method naming...
  • the large number of builder methods confuses the user
  • a builder method can be mistakenly called multiple times

Having a way to enforce order in the method chaining allows building something more like a wizard or workflow, which simplifies Builder usage greatly.

Let's introduce some interfaces according to the following rules
  • Each interface declares only a subset of the Builder methods
  • Interface methods return another interface instead of the Builder instance
  • The Builder itself implements all the interfaces

Together this basically forms a very simple example of a formal grammar, where the order in the interface chaining represents the production rules
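A tiny sketch of the idea with made-up names: each step returns the next interface, so the compiler itself enforces the production rules - size must be chosen before toppings, and the terminal step is only reachable afterwards.

```java
// Hypothetical order-enforcing builder following the three rules above.
public class PizzaBuilder implements PizzaBuilder.ChooseSize, PizzaBuilder.AddToppings {

    public interface ChooseSize {
        AddToppings size(int centimeters); // first step: returns the next interface, not the Builder
    }

    public interface AddToppings {
        AddToppings topping(String name); // may be repeated
        String bake();                    // terminal step instead of the traditional build()
    }

    private final StringBuilder pizza = new StringBuilder();

    private PizzaBuilder() {
    }

    public static ChooseSize begin() {
        return new PizzaBuilder(); // expose only the initial interface
    }

    @Override
    public AddToppings size(int centimeters) {
        pizza.append(centimeters).append("cm pizza");
        return this;
    }

    @Override
    public AddToppings topping(String name) {
        pizza.append(" + ").append(name);
        return this;
    }

    @Override
    public String bake() {
        return pizza.toString();
    }
}
```

With this in place, PizzaBuilder.begin().size(32).topping("cheese").bake() compiles, while calling topping() before size() or size() twice does not.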

To demonstrate the idea just described, I built a Selenium2/WebDriver WaitBuilder. It uses the WbBegin interface as the initial building point, the WbAnd interface allowing multiple conditions to be added, and finally WbEnd with a .seconds(int seconds) method instead of the traditional .build().

Selenium has a WebDriverWait class allowing conditional waits, which is very useful for testing pages where elements appear dynamically, or for performing assertions on Post/Redirect/Get (redirect after form submission) in a time-boxed manner. SeleniumWaitBuilder allows combining multiple conditions together.

It enables writing cool chains such as...
//pass test if the "results-table" element appears within 5 seconds, fail otherwise
SeleniumWaitBuilder.with(driver)
 .passOn().element(By.id("results-table")).seconds(5);

//pass test if the title becomes "Example Domain" within 5 seconds, or fail immediately if it happens to contain the "error" string
SeleniumWaitBuilder.with(driver).passOn()
  .title().equals("Example Domain")
 .and().failOn()
  .title().contains("error")
 .seconds(5);

SeleniumWaitBuilder.with(driver).passOn()
  .title().endsWith("Example Domain")
  .element(By.id("result-table"))
 .and().failOn()
  .title().contains("error")
  .url().contains("/500")
 .seconds(5);

And finally, the hero of today's blog post - the mighty SeleniumWaitBuilder itself!

I admit this is overkill for most builders, but still, it is neat...

Happy conditional waiting!

Wednesday 23 April 2014

Java cloud hosted continuous integration services

Last year I gave up maintaining my own VPS-hosted CI server with Git repos, Jenkins, Sonar, a Maven repository, LDAP, etc., and moved my source code repositories to GitHub. I upload build artifacts to Sonatype OSSRH in the case of libraries, and for web application deployments I'm still experimenting with OpenShift and some other services. But I also lost the possibility to build and deploy a project on demand. Fortunately many cloud hosted CI services have popped up in the last year or two.

CircleCI

A predefined, not very extensive toolset for Java projects is available. It recognized my pom.xml automatically without any configuration. Uses a circle.yml configuration file. It does not have a free plan (only a trial); the cheapest Solo plan is $19/month.

Travis-CI

Java is quite well supported. It is configured using .travis.yml. There is a free (unlimited) plan and also quite expensive paid plans

Codeship

Neat interface, but you are allowed to use only a few preinstalled Java tools. The free plan is limited to 50 builds per month, while the Basic plan will cost you $49 per month.

drone.io

Simplistic, with basic Java support. Free unlimited plan.

Cloudbees DEV@Cloud

Jenkins in the cloud with all its awesomeness. You can choose pretty much any Java, Maven or Gradle version you can imagine, plus zillions of Jenkins plugins. Pricing is trickier here because billing is based on build time and the offering includes application hosting (which you might not be interested in). The free plan includes 100 build minutes and there is also a FOSS plan. A Starter plan with 40 hours of builds per month will cost you $60 + $17 = $77 per month

Another Cloudbees service is BuildHive, where you get an unlimited number of builds, but on a shared and slower Jenkins instance with very limited configuration, and only for GitHub repositories.

Test drive with Phanbedder

I recently built a little library named Phanbedder and blogged about it a few days ago. While very tiny in Java code size, it is pretty unusual because it launches separate processes of the bundled PhantomJS native binary during the tests. This makes it a perfect candidate for testing cloud CI services.

The good news is that every above-mentioned service managed to compile and test it using the mvn clean test -Denforcer.skip=true command. I use maven-enforcer-plugin to enforce that Java 6 is used for compilation, but because many of the CIs offer only Java 7, I had to switch the enforcer off...

The trickier part is the execution of mvn deploy. I upload Maven snapshot artifacts to Sonatype OSSRH and later release versions to Maven central. For snapshot deployment into the Sonatype OSS Nexus, a username and password must be provided to make maven-deploy-plugin work, otherwise...

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.1:deploy (default-deploy) on project phanbedder-1.9.7: Failed to deploy artifacts: Could not transfer artifact net.anthavio:phanbedder-1.9.7:jar:1.0.1-20140422.184656-7 from/to sonatype-oss-snapshots (https://oss.sonatype.org/content/repositories/snapshots): Failed to transfer file: https://oss.sonatype.org/content/repositories/snapshots/net/anthavio/phanbedder-1.9.7/1.0.1-SNAPSHOT/phanbedder-1.9.7-1.0.1-20140422.184656-7.jar. Return code is: 401, ReasonPhrase: Unauthorized. -> [Help 1]

Normally, deployment credentials are stored inside your personal Maven settings.xml, but obviously this file is not present on a cloud CI server. Workarounds exist for Travis-CI. Cloudbees DEV@Cloud has the most elegant solution, but I guess no deployment with undisclosed credentials can be done from BuildHive and the others.

Now, here are links to public successful build logs for Travis-CI, drone.io, BuildHive and DEV@Cloud. Sadly, CircleCI and Codeship do not seem to support public projects.

The mentioned CI services can be webhook-triggered, and provide integrated browser testing and deployment into popular cloud hosting services like Heroku or Google App Engine. They evolve pretty quickly, adding more integrations as I write... Listing and comparing features here is not worth the effort, so check their documentation pages.

Everybody has different needs. All my projects are open source hobby projects, so I guess I'll go with webhook-triggered snapshot builds on BuildHive, or I might use Travis-CI to have snapshot deployments as well.

For Maven central release deployment builds, I see only one option - Cloudbees DEV@Cloud. In a future blog post, I'll describe how fully automatic deployment can be achieved using Cloudbees DEV@Cloud, maven-release-plugin and Sonatype OSSRH.

Happy cloud hosted CI builds!

Monday 21 April 2014

PhantomJS embedder for Selenium GhostDriver

There are quite a few drivers for Selenium2/WebDriver. Two of them are particularly interesting because they allow fast headless browser tests - HtmlUnitDriver and PhantomJSDriver.

Using HtmlUnitDriver is a piece of cake because HtmlUnit is a pure Java library, but the disadvantage is limited JavaScript execution support, making it usable mostly for static HTML sites only.

PhantomJSDriver from the GhostDriver project allows employing PhantomJS, which is a headless WebKit, much much closer to a real web browser. Being a native binary with its dependencies statically linked, PhantomJS does not need any installation. Just unzip it somewhere.

Selenium 2 has the annoying habit of needing the full path to the browser binary specified when a Driver instance is created. For the most common browsers like Chrome or Firefox, some basic discovery is performed, but it usually fails for me, and it is the same story with PhantomJS now.

Most probably, you will get the following exception

java.lang.IllegalStateException: The path to the driver executable must be set by the phantomjs.binary.path capability/system property/PATH variable; for more information, see https://github.com/ariya/phantomjs/wiki. The latest version can be downloaded from http://phantomjs.org/download.html
 at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
 at org.openqa.selenium.phantomjs.PhantomJSDriverService.findPhantomJS(PhantomJSDriverService.java:237)
 at org.openqa.selenium.phantomjs.PhantomJSDriverService.createDefaultService(PhantomJSDriverService.java:182)
 at org.openqa.selenium.phantomjs.PhantomJSDriver.<init>(PhantomJSDriver.java:99)

Another obstacle is usually the different operating systems on the developer's machine (MacOS, Windows) and the continuous integration server (Linux). Because PhantomJS is a native application, every OS needs its own binary to execute, and you'll need to distribute the right versions to every host that will ever build/test your project.

To escape both annoyances, I've created Phanbedder. A tiny library that bundles the PhantomJS binaries and unpacks the right one for you on any supported platform - Linux, Windows and Mac OSX.


File phantomjs = Phanbedder.unpack(); //Phanbedder to the rescue!
DesiredCapabilities dcaps = new DesiredCapabilities();
dcaps.setCapability(PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY, phantomjs.getAbsolutePath());
PhantomJSDriver driver = new PhantomJSDriver(dcaps);

//Usual Selenium stuff follows
try {
 driver.get("https://www.google.com");
 WebElement query = driver.findElement(By.name("q"));
 query.sendKeys("Phanbedder");
 query.submit();
 Assertions.assertThat(driver.getTitle()).contains("Phanbedder");
} finally {
 driver.quit();
}

To run the code above you'll need the following dependencies. The number 1.9.7 in the artifactId stands for the PhantomJS version bundled inside.

    <dependency>
      <groupId>net.anthavio</groupId>
      <artifactId>phanbedder-1.9.7</artifactId>
      <version>1.0.0</version>
    </dependency>

    <dependency>
      <groupId>com.github.detro.ghostdriver</groupId>
      <artifactId>phantomjsdriver</artifactId>
      <version>1.1.0</version>
    </dependency>

Because this library targets various OSes, testing is tricky. I have tested it on Mac OS X 10.6 and Windows 7, and I've also given it a spin on travis-ci (good) and Cloudbees Jenkins (great). Source code is hosted on GitHub and deployed into Maven Central, again using Cloudbees Cloud@DEV.

Happy Phanbedding!

Tuesday 15 April 2014

How many Base64 encoders are in the JDK/JRE

Base64 variants

Base64 encoding is an algorithm that converts binary data into ASCII. The resulting string consists of the characters A-Z, a-z, 0-9, two extra characters '+' (plus) and '/' (slash), and the padding character '=' (equals). The conversion does not happen the same way every time - there are a few variants of it.

- Simple (basic) encoding creates a single longlonglonglonglonglonglonglonglonglonglonglooooong= base64 encoded line.

- Fixed line length encoding, sometimes also called Mime base64 encoding, chunked encoding or simply line folding. Instead of a single long line, it produces multiple lines, usually 76 characters long. This is quite important, because it is mandatory in some scenarios (binary Mime attachments), while harmful in others (BASIC authentication header).

- URL (safe) encoding produces a string that can be used as a parameter value in a URL. Because '+' and '/' are not allowed, they are encoded as '-' and '_', while the '=' padding character is usually removed.
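The alphabet difference between the basic and URL-safe variants can be demonstrated with Java 8's java.util.Base64 (covered later in this post); a minimal sketch, with input bytes picked so that the differing characters actually show up:

```java
import java.util.Base64;

public class Base64Variants {
    public static void main(String[] args) {
        // 0xFB 0xFF maps to alphabet indices 62 and 63, exactly where the variants differ
        byte[] data = {(byte) 0xfb, (byte) 0xff};

        System.out.println(Base64.getEncoder().encodeToString(data));    // "+/8=" basic alphabet
        System.out.println(Base64.getUrlEncoder().encodeToString(data)); // "-_8=" URL-safe alphabet
        // getMimeEncoder() uses the basic alphabet but folds output into 76-character lines
    }
}
```

The two bytes produce the 6-bit groups 111110 (62), 111111 (63) and 111100 (60), so the first two output characters land precisely where '+/' and '-_' diverge.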

Now back to the initial question...

How many Base64 encoders/decoders are present in the Oracle (Sun) JDK?

I found 6 of them

sun.misc.BASE64Encoder Since Java 1.0? Well, we all know that we should not touch anything from the sun.* or com.sun.* packages. So we don't.

javax.xml.bind.DatatypeConverter Since Java 1.6 - This one actually works, but allows you only basic encoding. No mime or url encoding.

java.util.Base64 Since Java 1.8 - Finally a generally usable Base64 encoder/decoder allowing basic, mime and url safe encoding.

And finally some curiosities illustrating how even Sun/Oracle JDK/JRE contributors were missing a Base64 encoder, so they created their own.

java.util.prefs.Base64 Since Java 1.4, but has default (package) visibility, therefore not usable

com.sun.net.httpserver.Base64 Since Java 1.6, but has default (package) visibility, therefore not usable

com.sun.org.apache.xml.internal.security.utils.Base64 - A similar story to sun.misc.BASE64Encoder; it also internally uses XMLUtils.ignoreLineBreaks() to control line folding...

Let's see some encoding results

Both commons-codec 1.6+ and Java 8's java.util.Base64 can produce and consume any of the mentioned base64 variants, but beware of quite different encoding results. I think a lot of headaches come from that.

In the following test, commons-codec 1.9 and Java 8u5 are used

Mime (chunked) encoding
import org.apache.commons.codec.binary.Base64;

String string = "This string encoded will be longer that 76 characters and cause MIME base64 line folding";
 
byte[] encodeBase64Chunked = Base64.encodeBase64Chunked(string.getBytes());
System.out.println("commons-codec Base64.encodeBase64Chunked\n" + new String(encodeBase64Chunked));

String encodeMimeToString = java.util.Base64.getMimeEncoder().encodeToString(string.getBytes());
System.out.println("java.util.Base64.getMimeEncoder().encodeToString\n" + encodeMimeToString);
prints
commons-codec Base64.encodeBase64Chunked
VGhpcyBzdHJpbmcgZW5jb2RlZCB3aWxsIGJlIGxvbmdlciB0aGF0IDc2IGNoYXJhY3RlcnMgYW5k
IGNhdXNlIE1JTUUgYmFzZTY0IGxpbmUgZm9sZGluZw==

java.util.Base64.getMimeEncoder().encodeToString
VGhpcyBzdHJpbmcgZW5jb2RlZCB3aWxsIGJlIGxvbmdlciB0aGF0IDc2IGNoYXJhY3RlcnMgYW5k
IGNhdXNlIE1JTUUgYmFzZTY0IGxpbmUgZm9sZGluZw==

The Java 8 Mime Encoder ends with the '==' padding and does not add a trailing newline (CR/LF) after it!

URL (safe) encoding
String string = "ůůůůů";

String encodeUrlToString = java.util.Base64.getUrlEncoder().encodeToString(string.getBytes());
System.out.println("java.util.Base64.getUrlEncoder().encodeToString\n" + encodeUrlToString);

String encodeBase64URLSafeString = Base64.encodeBase64URLSafeString(string.getBytes());
System.out.println("commons-codec Base64.encodeBase64URLSafeString\n" + encodeBase64URLSafeString);
prints
java.util.Base64.getUrlEncoder().encodeToString
xa_Fr8Wvxa_Frxc=
commons-codec Base64.encodeBase64URLSafeString
xa_Fr8Wvxa_Frxc
The Java 8 url Encoder leaves the padding '=' at the end of the result, which makes it unusable as a URL parameter value!

UPDATE: This was reported a while ago, and it has turned out that any Encoder can be switched into non-padding mode using the withoutPadding() method.

String string = "ůůůůů";
String encodeUrlToString = Base64.getUrlEncoder().withoutPadding().encodeToString(string.getBytes());
System.out.println("java.util.Base64.getUrlEncoder().withoutPadding().encodeToString\n" + encodeUrlToString);
prints
java.util.Base64.getUrlEncoder().withoutPadding().encodeToString
xa_Fr8Wvxa_Frw

Note: In the quite old commons-codec 1.4, chunking was inconsistently turned on by default for the encode() method, resulting in nasty surprises. See the Jira ticket.

Happy Base64 encoding

Friday 11 April 2014

Spring MockMvc tests

The MVC Test Framework arrived with Spring 3.2. It allows you to write integration tests (well... almost) for your Spring MVC @Controller(s).

One of the server-side cornerstones is the MockMvc class. It allows you to execute requests against your Controllers very easily, but it needs to be initialized before use. There are two ways to initialize MockMvc, and the choice depends on how broad an integration with other Spring MVC components your tests need. In my case, it was a custom view resolver and a few others I removed for the sake of simplicity.

The first is meant for simple single-Controller testing

    @Test
    public void foo() {
        TestedController controller = new TestedController();
        MyCustomViewResolver resolver = new MyCustomViewResolver();
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(controller).setViewResolvers(resolver).build();

        MvcResult result = mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
        //parse and assert result.getResponse().getContentAsString() as view was resolved by MyCustomViewResolver
    }
You can employ an extensive set of the spring-mvc usual suspects, like ConversionService, ViewResolvers, MessageConverters, ... see the StandaloneMockMvcBuilder javadoc.

All this manual assembling makes me feel a little uncomfortable. Normally all those beans are wired together in the Spring Dispatcher context.

Another way to initialize MockMvc is using @WebAppConfiguration and MockMvcBuilders.webAppContextSetup(webAppContext)

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = TestMvcSpringConfig.class)
public class FooTests {

    @Autowired
    private WebApplicationContext webAppContext;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
    }

    @Test
    public void foo() {
        MvcResult result = this.mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
        MockHttpServletResponse response = result.getResponse();
        //parse and assert response.getContentAsString() as view was resolved by MyCustomViewResolver
    }

    @EnableWebMvc
    @Configuration
    public static class TestMvcSpringConfig extends WebMvcConfigurerAdapter {

        @Override
        public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
            configurer.enable();
        }

        @Bean
        public TestedController getController() {
            return new TestedController();
        }

        @Bean
        public ViewResolver viewResolver() {
            MyCustomViewResolver resolver = new MyCustomViewResolver();
            return resolver;
        }
    }
}

For some unknown reason (one that disappeared as quickly as it appeared), Spring was failing to create the @EnableWebMvc annotated @Configuration, complaining about a missing servlet context. I had to get my hands dirty and roll my own Spring web context. So for completeness, here is how it was done using MockServletContext. It may be useful sometime.


    @Test
    public void test() throws Exception {
        MockServletContext servletContext = new MockServletContext();

        AnnotationConfigWebApplicationContext springContext = new AnnotationConfigWebApplicationContext();
        springContext.setServletContext(servletContext);
        springContext.register(TestMvcSpringConfig.class);
        springContext.refresh();

        MockMvc mockMvc = MockMvcBuilders.webAppContextSetup(springContext).build();

        MvcResult result = mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
    }

Happy Spring MVC testing!

Thursday 10 April 2014

RPM upgrade and embedded Jetty

The Java web application I've been working on recently (let's name it lobster) embeds and packages Jetty inside itself. This makes the application artifact more self-contained and independent, because it does not require a preinstalled and preconfigured server.

To simplify application deployment even more, it is packaged as an RPM file, which is uploaded into a Nexus serving as a Yum repository.

Deployment then is a simple matter of executing sudo yum install lobster, and thanks to RPM scriptlets, the server is automatically restarted as a part of the installation.

Everything went nice and smoothly, until the second release. When yum update happened, we found what an amazingly weird thing an RPM upgrade is.

The sudo yum update lobster execution completed, but the application failed to start and the logfile contained strange exceptions
Caused by: java.io.FileNotFoundException: /opt/lobster/lib/lobster-core-0.4.0.jar (No such file or directory)
 at java.util.zip.ZipFile.open(Native Method) ~[na:1.7.0_40]
 at java.util.zip.ZipFile.<init>(ZipFile.java:215) ~[na:1.7.0_40]
 at java.util.zip.ZipFile.<init>(ZipFile.java:145) ~[na:1.7.0_40]
 at java.util.jar.JarFile.<init>(JarFile.java:153) ~[na:1.7.0_40]
 at java.util.jar.JarFile.<init>(JarFile.java:90) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:89) ~[na:1.7.0_40]

What was even weirder: it was a lobster version 0.5.0 installation, and the 0.4.0 stated in the stacktrace actually was the previous version!

To make a long story short, after some googling I found the RPM upgrade sequence.

  1. execute new version %pre [ $1 >= 2 ]
  2. unpack new version files (both old and new are now mixed together)
  3. execute new version %post [ $1 >= 2 ]
  4. execute old version %preun [ $1 == 1 ]
  5. remove old files (only new and unchanged stays)
  6. execute old version %postun [ $1 == 1 ]

The important thing is that between steps 2 and 5, a mixture of old and new version jar files is present in the installation directory! An attempt to start the server java process in the %post or %preun scriptlet can only result in a disaster like the one we experienced. The old version jars will be deleted right after scriptlet execution.

Here comes a working install/upgrade/uninstall solution

%pre scriptlet

#!/bin/sh
# rpm %pre scriptlet
#
# parameter $1 means
# $1 == 1 ~ initial installation
# $1 >= 2 ~ version upgrade
# never executed for uninstall

echo "rpm: pre-install $1"

# failsafe commands - can't break anything

# make sure that user exist
id -u lobster &>/dev/null || useradd lobster

# make sure that application is not running
if [ -f /etc/init.d/lobster ]; then
 /sbin/service lobster stop
fi

%post scriptlet

The important part is NOT to start the application on an rpm upgrade
#!/bin/sh
# rpm %post scriptlet
#
# parameter $1 means
# $1 == 1 ~ initial installation
# $1 >= 2 ~ version upgrade
# never executed for uninstall

echo "rpm: post-install $1"

# initial install
if [ "$1" -eq "1" ]; then
 /sbin/chkconfig --add lobster
 /sbin/service lobster start
fi

%preun scriptlet

#!/bin/sh
# rpm %preun scriptlet
#
# parameter $1 means
# $1 == 0 ~ uninstall last
# $1 == 1 ~ version upgrade
# never executed for first install

echo "rpm: pre-uninstall $1"

# uninstall last
if [ "$1" -eq "0" ]; then
 /sbin/service lobster stop
 /sbin/chkconfig --del lobster
fi

%postun scriptlet

Here is the right place to start the application on an rpm upgrade
#!/bin/sh
# rpm %postun scriptlet
#
# parameter $1 means
# $1 == 0 ~ uninstall last
# $1 == 1 ~ version upgrade
# never executed for first install

# console output is suppressed for %postun !
echo "rpm: post-uninstall $1"

# upgrade
if [ "$1" -ge "1" ]; then
 /sbin/service lobster start
fi

Useful RPM scriptlet documentation is in the Fedora Wiki, as well as how to integrate with SysV Init Scripts.

Happy RPM deployments!

Monday 24 March 2014

Spring Security with multiple AuthenticationManagers

Spring Security (once Acegi Security) configuration was for a long time a quite exhausting task. Gluing together all the filters, entry points, success handlers and authentication providers was not a small price to pay for the overwhelming flexibility of this awesome framework.

Starting with version 2.0, the simplified namespace configuration (<sec:http/>) was introduced, allowing the most common setups to be configured with just a few lines. Not much changed with version 3.0.

However, this new configuration style also introduced one important limitation - only a single security filter chain with a single AuthenticationManager can be configured using it. When you happen to have a web application with two faces - web pages and REST endpoints - with quite different authentication requirements, you are in trouble. The only option is to fall back to traditional bean-by-bean configuration.

The following gist shows how it can be done

What a massive piece of xml!!!

The limitation of a single security filter chain was removed in version 3.1, which allows multiple <http authentication-manager-ref="..."/> elements, each possibly with a different AuthenticationManager.

The latest and greatest version 3.2 brought the long-awaited Java Configuration, with the sweet @EnableWebSecurity and WebSecurityConfigurerAdapter combo. To avoid repeating the same mistake, this funny trick can be used to define multiple filter chains and AuthenticationManagers.
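That trick boils down to nesting multiple @Order-ed WebSecurityConfigurerAdapter subclasses, each owning its own filter chain and AuthenticationManager. A rough sketch under Spring Security 3.2 - the matchers, users and roles here are illustrative, not taken from the original gists:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class MultiChainSecurityConfig {

    @Configuration
    @Order(1) // evaluated first - claims all /api/** requests
    public static class RestSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(AuthenticationManagerBuilder auth) throws Exception {
            // separate AuthenticationManager just for REST clients
            auth.inMemoryAuthentication().withUser("api-client").password("secret").roles("API");
        }

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.antMatcher("/api/**")
                .authorizeRequests().anyRequest().hasRole("API")
                .and().httpBasic(); // stateless BASIC auth for REST
        }
    }

    @Configuration // default @Order - evaluated last, catches everything else
    public static class WebPagesSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(AuthenticationManagerBuilder auth) throws Exception {
            auth.inMemoryAuthentication().withUser("user").password("secret").roles("USER");
        }

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.authorizeRequests().anyRequest().authenticated()
                .and().formLogin(); // form login for web pages
        }
    }
}
```

Each nested @Configuration builds its own SecurityFilterChain, and because each one overrides configure(AuthenticationManagerBuilder), each chain ends up with its own AuthenticationManager.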

But as an exercise, I also tried to disassemble the configuration into the old-school bean-by-bean way. Let's call it the poor man's Spring Security Java config.

The following gist shows how it can be done

While this is not as complex a setup as the xml-based example before, it is still a big chunk of code.

Happy Spring securing!

Monday 3 March 2014

Logback with rsyslog

In the company I work for, developers don't have any access to live servers. While this is good for responsibility-division reasons, it makes problem and error tracking painful. Most often we need access to log files without asking somebody from the operations team every single time.

To overcome this, we went the predictable way of setting up a company-wide log aggregation server. While there is a plethora of solutions, one is quite easy to set up if you are on Unix/Linux - rsyslog. It is widely used to provide the standard syslog feature, so there is a good chance that you have it installed already.

I will not talk much about the rsyslog server. It usually listens on UDP port 514 and has the configuration file /etc/rsyslog.conf. An excellent article about installation and basic configuration is here.

Any (client host) rsyslog can be configured to forward its syslog messages into the server rsyslog. This might be the right way when you develop a native or RoR application and log into the local syslog, but the Java world has its well-established logging libraries, namely Logback or Log4j. Both of them provide appenders capable of sending log events to an rsyslog server, while logging into local files as usual.

Anyway, if you are already logging directly into the local syslog and you need to forward messages to the log server, just add the following to the end of the /etc/rsyslog.conf file.

*.* @logs.mycompany.com:514
You can use the DNS name or IP address of your rsyslog server. 514 is the default rsyslog port.

To log from Java directly to the rsyslog server, skipping the local syslog completely, add a SyslogAppender into logback.xml.
Logback does not provide explicit configuration for the TAG aka APP-NAME aka $programname field, but it can be emulated using a special syntax for the first word of the log message:
it must start with ' ' (whitespace) and end with ':' (colon) - see ' mycoolapp:' in the suffixPattern element value. A Logback configuration feature ticket exists, but it seems to be abandoned. I don't use Log4j, so here is just a link to an article about Log4j and syslog. But I don't think it covers TAG usage.
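A minimal sketch of such a SyslogAppender in logback.xml - the host name, facility and the ' mycoolapp:' tag are placeholders to adapt, and the rest of the suffixPattern is just an example layout:

```xml
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
  <syslogHost>logs.mycompany.com</syslogHost>
  <port>514</port>
  <facility>LOCAL0</facility>
  <!-- the first word ' mycoolapp:' (leading space, trailing colon) emulates the TAG/$programname field -->
  <suffixPattern> mycoolapp: [%thread] %logger %msg</suffixPattern>
</appender>
```

Attach it to a logger or the root logger with the usual <appender-ref ref="SYSLOG"/> alongside your file appenders.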

Back on the server, log messages from multiple hosts and applications are written, by default, into a single /var/log/messages file, making it quite messy. Fortunately rsyslog provides an easy way of routing messages coming from different hosts or applications into different files.

You can split messages by the hostname or IP address they are coming from, as described here, or you can split them by log message content, as described here. You can throw them away too, as described here.

What we chose is routing messages belonging to the same application into a single log file. Messages come from different hosts because the application runs in a cluster. Every log line contains the hostname, so the origin can still be recognized.
if $programname == 'mycoolapp' then /var/log/mycoolapp/mycoolapp.log
To make this work, the client must send the TAG ($programname) field correctly, as described above for logback.xml.

Happy rsyslogging

Wednesday 26 February 2014

Quartz on H2 with MODE=MYSQL

We are using the embedded H2 database for local development, but the application, when deployed, uses MySQL. The H2 database has a cool compatibility mode feature allowing it to mimic another RDBMS, so you can use its non-standard SQL statements. The level of compatibility is not 100%, of course.

While Hibernate runs nicely on H2 with MySQLInnoDBDialect configured, trouble starts when you try to run the Quartz Scheduler (2.2.1) on H2, even in MYSQL mode. You'll face exceptions like...

org.quartz.JobPersistenceException: Couldn't store job: Value too long for column "IS_DURABLE VARCHAR(1) NOT NULL": "'TRUE' (4)"; SQL statement:
INSERT INTO QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_NONCONCURRENT, IS_UPDATE_DATA, REQUESTS_RECOVERY, JOB_DATA)  VALUES('booster-scheduler', ?, ?, ?, ?, ?, ?, ?, ?, ?) [22001-175]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1118) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$3.executeVoid(JobStoreSupport.java:1090) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$VoidTransactionCallback.execute(JobStoreSupport.java:3703) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$VoidTransactionCallback.execute(JobStoreSupport.java:3701) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreCMT.executeInLock(JobStoreCMT.java:245) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1086) ~[quartz-2.2.1.jar:na]
 at org.quartz.core.QuartzScheduler.addJob(QuartzScheduler.java:969) ~[quartz-2.2.1.jar:na]
 at org.quartz.core.QuartzScheduler.addJob(QuartzScheduler.java:958) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.StdScheduler.addJob(StdScheduler.java:268) ~[quartz-2.2.1.jar:na]
 at org.springframework.scheduling.quartz.SchedulerAccessor.addJobToScheduler(SchedulerAccessor.java:342) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerAccessor.addTriggerToScheduler(SchedulerAccessor.java:365) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerAccessor.registerJobsAndTriggers(SchedulerAccessor.java:303) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerFactoryBean.afterPropertiesSet(SchedulerFactoryBean.java:514) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1612) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 ... 56 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLException: Value too long for column "IS_DURABLE VARCHAR(1) NOT NULL": "'TRUE' (4)"; SQL statement:
INSERT INTO QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_NONCONCURRENT, IS_UPDATE_DATA, REQUESTS_RECOVERY, JOB_DATA)  VALUES('booster-scheduler', ?, ?, ?, ?, ?, ?, ?, ?, ?) [22001-175]
 at org.h2.engine.SessionRemote.done(SessionRemote.java:589) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.command.CommandRemote.executeUpdate(CommandRemote.java:186) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:154) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:140) ~[h2-1.3.175.jar:1.3.175]
 at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:205) ~[bonecp-0.8.0.RELEASE.jar:na]
 at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.insertJobDetail(StdJDBCDelegate.java:624) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1112) ~[quartz-2.2.1.jar:na]
 ... 70 common frames omitted

The reason behind this is that the current H2 version (1.3.175) is unable to perform an automatic conversion from boolean into VARCHAR(1) like MySQL can. There is a recent discussion about implementing this, but until it is released, you can make it work simply by changing VARCHAR(1) into BOOLEAN inside the Quartz schema tables_mysql_innodb.sql.
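The change itself is trivial - repeat it for each boolean column in tables_mysql_innodb.sql; a sketch showing a single column of QRTZ_JOB_DETAILS (MySQL itself accepts BOOLEAN as an alias for TINYINT(1)):

```sql
-- before (fails on H2 in MODE=MYSQL)
IS_DURABLE VARCHAR(1) NOT NULL,
-- after
IS_DURABLE BOOLEAN NOT NULL,
```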

The following gist shows the result

We can only guess why the Quartz guys are sticking with VARCHAR(1) when BOOLEAN makes much more sense.

Happy scheduling

Friday 17 January 2014

Openshift System properties and Environment variables

Cloud-deployable applications should be created as self-contained as possible. If an application allows some configuration, these settings should be packaged inside the war file and used as default values, while a value override mechanism is provided.

Linux Environment variables

Unfortunately, the following (usual) way of setting an environment variable will not work on Openshift.
ctl_app stop jbossews
export MY_ENV_VAR="my_env_var_value"
ctl_app start jbossews
To be precise, a JAVA_OPTS_EXT variable set like this will not be visible to the java process running your application. To set it properly, you have to use the Openshift rhc tool, or use a shortcut
echo "my_env_var_value" > ~/.env/user_vars/MY_ENV_VAR
And now you can get the variable value in your webapp
  String my_env_var = System.getenv("MY_ENV_VAR"); //"my_env_var_value"
  if (my_env_var == null) {
    my_env_var = "Default value..."; //I told you not to depend on external configuration and provide default values...
  }

Java System Properties

System Properties are most commonly used to pass parameters into Java application. They are specified as java "-D" prefixed command-line parameters:

java -Dmyapp.param=whatever ...
then they can be obtained inside the application
String param = System.getProperty("myapp.param"); //"whatever"
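In the spirit of the self-contained defaults advice above, System.getProperty also has a two-argument variant that returns a fallback value when the property was not set, sparing the null check:

```java
public class PropertyDefaults {
    public static void main(String[] args) {
        // returns "default-value" unless -Dmyapp.param=... was passed to the JVM
        String param = System.getProperty("myapp.param", "default-value");
        System.out.println(param);
    }
}
```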

But Openshift-managed servers are started using the ctl_app start ... (or gears start ...) command, and we are not directly in control of executing java, so we can't add "-D" parameters.

The trick is to use the JAVA_OPTS_EXT environment variable. Since we know how to set an Openshift environment variable, it is just a matter of:

echo "-Dmyapp.param=myapp.value" > .env/user_vars/JAVA_OPTS_EXT
...then restart your server...
ctl_app stop jbossews
ctl_app start jbossews
...and enjoy the fruits of your effort (look for -Dmyapp.param=myapp.value)...
[wotan-anthavio.rhcloud.com 52d184e45973ca0bc0000088]\> jps -mlv
29444 sun.tools.jps.Jps -mlv -Dapplication.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45 -Xms8m
24633 org.apache.catalina.startup.Bootstrap start -Xmx256m -XX:MaxPermSize=102m -XX:+AggressiveOpts -DOPENSHIFT_APP_UUID=52d184e45973ca0bc0000088 -Djava.util.logging.config.file=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews//conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Djava.net.preferIPv4Stack=true -Dmyapp.param=myapp.value -Dcatalina.base=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews/ -Dcatalina.home=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews/ -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews//tmp

Wednesday 15 January 2014

Deploy to Openshift from Github repository

On the Cloudbees RUN@Cloud or Jelastic cloud PaaS, it is simple to build the application war on your devbox and then use the provided web interface to upload and deploy it.

Openshift does not offer such a deployment option right away, and similarly to Heroku it encourages you to do builds and deployments via a git push commit hook. First I'll do the deployment the Openshift-encouraged and documented way, but because it actually is possible to deploy an externally built war to Openshift, I'll describe how to do that too.

I have an existing, quite simple, Maven-built, multimodule application Wotan hosted in a Github repository. One of its submodules - wotan-browser - is a web application packaged as a war. The plan is to deploy this webapp to Openshift.

Create Account and some Gears

An Openshift account comes with 3 free Gears (simultaneously running JVMs). I signed in and then created a Gear named "wotan" with the "Tomcat 7 (JBoss EWS 2.0)" cartridge. The same can be done using the rhc app create wotan jbossews-2.0 command. I also added the "Jenkins Client" cartridge, which creates a Jenkins instance that occupies a second Gear. To execute a Jenkins build, a third and last free Gear is required. Since that escalated quickly, I'll later show how to build the web application without Jenkins to save two Gears.

In order to simplify authentication, I configured the same ssh public key I'm using for Github on the Settings page. I strongly encourage you to do so as well. If you do not have any ssh key yet, create one using ssh-keygen, or puttygen if you are on Windows.

What Openshift provides

Openshift gives you a real linux system account, which is nice. It's a big difference compared to other cloud platforms, where you have only a limited interface (web or SDK) to interact with your server and applications.
You can log in...
ssh 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com
...look around for a while...
[wotan-anthavio.rhcloud.com 52d184e45973ca0bc0000088]\> ls -gG
total 20
drwxr-xr-x.  4 4096 Jan 14 15:19 app-deployments
drwxr-xr-x.  4 4096 Jan 11 12:52 app-root
drwxr-xr-x.  3 4096 Jan 11 12:53 git
drwxr-xr-x. 13 4096 Jan 11 12:53 jbossews
drwxr-xr-x.  8 4096 Jan 11 12:57 jenkins-client
...but the most interesting directories are...
  • jbossews - The server from the Tomcat 7 (JBoss EWS 2.0) cartridge. This is the ultimate place where webapps and logs will be. If you have picked another cartridge type for your application, your directory will be different of course
  • git - Git repository with default simple web application
The Git repository comes prepopulated with a default simple web application. I'm gonna replace it with my Github repository content, but it is worth at least checking it out to see what the vanilla pom.xml looks like.
git clone ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git/
Quite important is the .openshift directory, because it contains various Openshift metadata and configuration. See the documentation page.

The default simple application is also deployed on Gear creation, as is a Jenkins instance with a preconfigured build Job. You can start a new build and watch on the Applications page how one Gear becomes occupied by the build process and disappears after a while. It is also useful to examine the build console output to see what precisely gets executed and how - java version, maven parameters, etc...

Deploy using Openshift Jenkins

On my devbox I have my Github repo checked out (on a Mac OSX and a Windows 7 PC with Cygwin)

git clone git@github.com:anthavio/wotan.git
Now merge the Github repo into the Openshift repo. The steps are taken from stackoverflow, so I'll just list the commands

git remote add openshift -f ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git/
git merge openshift/master -s recursive -X ours
git push openshift HEAD

Every git push openshift HEAD starts a new Jenkins build. It fails at first because pom.xml is not ready for Openshift, and tweaks have to be made. Let's examine these tweaks.

Maven pom.xml changes for Openshift

The whole pom.xml can be found here
  • To use dependencies not present in the Maven Central repository, an additional repository (sonatype-oss-public) has been added
  • You might have seen a slightly changed maven-war-plugin configuration in the vanilla Openshift pom.xml. The contract for deployment is that all deployable artifacts must be present in the webapps directory and have a nice file name, because this name will become part of the url
  • Turn off the maven-enforcer-plugin - this might not be necessary for most developers. I enforce conservative Java 6, but Openshift runs on Java 7

Skip OpenShift git repository

Actually, you need not use the Openshift git repository at all. Just go to the Jenkins Job configuration and change the Git repository URL from ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git to the Github repository https://github.com/anthavio/wotan.git directly, and forget the Openshift git repo forever...

Deploy pre-built war

Now I'll use another web application of mine - Disquo. I'll build the war on my devbox and deploy it manually. No Jenkins involved.
# build war locally
git clone git@github.com:anthavio/disquo.git /tmp/disquo.git
cd /tmp/disquo.git
mvn clean package -Dmaven.test.skip=true

# upload assembled war
scp disquo-web/target/disquo-web-1.0.0-SNAPSHOT.war 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com:~/app-root/repo/webapps/disquo.war
ssh 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com
# stop tomcat & deploy webapp & start tomcat
ctl_app stop jbossews
cp app-root/repo/webapps/disquo.war jbossews/webapps
ctl_app start jbossews

# Check Tomcat log file
tail -f jbossews/logs/catalina.out
Deploying this way has nice effects
  • The Jenkins gear and building gear are not required and can be removed, so all 3 gears are available for running applications
  • Multiple applications can be deployed into single Tomcat gear
Happy Openshift deployments folks!