Thursday, 16 March 2017

Apache Spark sample and coalesce

It is wasteful to work with a large number of small partitions of data, especially when S3 is used as the data storage. IO then becomes an unacceptably large part of the time spent in a task, most annoyingly when Spark is just moving data files from the _temporary location into the final destination, after the real work has been completed.

Here is a small tip on how to down-sample a dataset while keeping the resulting partition sizes similar to the original dataset. The key element is to get the original number of partitions, decrease it proportionally and coalesce the RDD to that number.

It is useful when you test your Spark job on a single node (or a few), but with the amount of data you expect production cluster nodes to handle.


  import org.apache.hadoop.io.compress.GzipCodec
  import org.apache.spark.SparkContext

  def sample(sparkCtx: SparkContext, inputPath: String, outputPath: String, fraction: Double, seed: Int = 1) = {
    val inputRdd = sparkCtx.textFile(inputPath)
    // Decrease the partition count proportionally to the fraction, but keep at least one partition
    val outputPartitions = math.max(1, (inputRdd.partitions.length * fraction).toInt)
    inputRdd
      .sample(false, fraction, seed)
      .coalesce(outputPartitions, shuffle = false)
      .saveAsTextFile(outputPath, codec = classOf[GzipCodec])
  }
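
The same trick against Spark's Java API would look roughly like this (a sketch only; the class and method names are mine):

import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Sampler {

    public static void sample(JavaSparkContext sparkCtx, String inputPath, String outputPath,
                              double fraction, long seed) {
        JavaRDD<String> inputRdd = sparkCtx.textFile(inputPath);
        // Shrink the partition count proportionally to the fraction, keeping at least one
        int outputPartitions = Math.max(1, (int) (inputRdd.getNumPartitions() * fraction));
        inputRdd.sample(false, fraction, seed)
                .coalesce(outputPartitions, false)
                .saveAsTextFile(outputPath, GzipCodec.class);
    }
}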

Happy sampling!

Monday, 20 June 2016

Maven, SBT, Gradle local repository sharing

I've run into an environment where different projects were using different build tools - Maven, Gradle and SBT. The problem was how to reuse build artifacts between them, and here goes a small summary of my investigation. I will cover only local file repositories and NOT publishing/retrieving to/from remote repositories.

TL;DR every tool is able to publish into and retrieve from the Maven local repository

I'm using today's latest versions: Maven 3.3.9, Gradle 2.14, SBT 0.13.11. While Maven dependency management and artifact publishing are pretty stable, SBT and Gradle are still evolving quite a lot, so check your versions carefully.

All build tools (unless you override the default configuration) keep the local repository inside your home directory. Its location differs between operating systems; with my user name mvanek it will be

  • Linux ${USER_HOME} is /home/mvanek
  • Mac OS X ${USER_HOME} is /Users/mvanek
  • Windows 10 %USERPROFILE% is C:\Users\mvanek

Maven 3.3.9

  • mvn package - Operates inside the target subdirectory of your project
  • mvn install - Also copies the jar into ${USER_HOME}/.m2/repository

SBT 0.13.11

sbt build operates inside the target subdirectory of your project.

Let's have the following simple build.sbt:
organization := "bt"
name := "bt-sbt"
version := "1.0.0-SNAPSHOT"

scalaVersion := "2.11.7"
sbtVersion := "0.13.11"

resolvers += Resolver.mavenLocal // Also use $HOME/.m2/repository

libraryDependencies += "commons-codec" % "commons-codec" % "1.10" 
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"

SBT -> Ivy

SBT uses Apache Ivy internally, so executing sbt publish-local (which is the same as sbt publishLocal) will build and store the produced jars into ${USER_HOME}/.ivy2/local/bt/bt-sbt_2.11 subdirectories. Because Scala applications are bound to the Scala version they are compiled with, the Ivy module/Maven artifactId/jar file name will be bt-sbt_2.11 to reflect that.

SBT -> Maven

Since SBT 0.13.0, publishing into the Maven local repo is trivial. Execute sbt publish-m2 (which is the same as sbt publishM2) to build and store the compiled binary, source and javadoc jars into ${USER_HOME}/.m2/repository/bt/bt-sbt_2.11/1.0.0-SNAPSHOT

Note that SBT uses the ${USER_HOME}/.ivy2/cache directory as a file cache for artifacts from remote repositories.

Gradle 2.14

Gradle by itself does not have a concept of a local repository. gradle build operates inside the build subdirectory of your project.

Gradle's build behaviour changes with the plugins applied inside build.gradle. Let's assume only the java plugin is used in a simple build.gradle file:
apply plugin: 'java'

group = 'bt'
version = '1.0.0-SNAPSHOT'

task sourceJar(type: Jar) {
    from sourceSets.main.allJava
}

repositories {
    mavenCentral()
}

dependencies {
 compile group: 'commons-codec', name: 'commons-codec', version: '1.10'
 testCompile group: 'junit', name: 'junit', version: '4.12'
}

The name of the directory that contains the Gradle project files is taken as the project name. This is important because the project name becomes the Maven artifactId, and it cannot be redefined in build.gradle. You can redefine it in the settings.gradle file with the line rootProject.name = 'bt-gradle'. There are also options to configure it just for individual plugins, but I would not recommend that because you then need to configure it for every plugin that might be affected (some publish plugin, for example).

Gradle -> Maven

For sharing with Maven, probably the easiest choice is the maven plugin.

Apply the plugin in your build.gradle (apply plugin: 'maven'), then execute the build via gradle install and your jar will land in ${USER_HOME}/.m2/repository/bt/bt-gradle/1.0.0-SNAPSHOT/bt-gradle-1.0.0-SNAPSHOT.jar

Another Gradle -> Maven option is the maven-publish plugin, which might be a good choice if you also plan to publish artifacts into a remote repository.

Add into build.gradle

apply plugin: 'maven-publish'

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}
and execute the build using gradle publishToMavenLocal

Gradle -> Ivy

Because SBT uses the Ivy repository layout instead of the Maven one, the ivy-publish plugin should be the weapon of choice here.
apply plugin: 'ivy-publish'

publishing {
    publications {
        ivyJava(IvyPublication) {
            from components.java
        }
    }
    
    repositories {
        ivy {
            url "${System.properties['user.home']}/.ivy2/local"
        } 
    }
}
then execute the build using gradle publishIvyJavaPublicationToIvyRepository and the jar will appear in ${USER_HOME}/.ivy2/local/bt/gradle-built/1.0.0-SNAPSHOT/gradle-built-1.0.0-SNAPSHOT.jar. This is unfortunately wrong, as it is the Maven layout, not Ivy. You can add the layout explicitly:
    repositories {
        ivy {
            layout "ivy"
            url "${System.properties['user.home']}/.ivy2/local"
        } 
    }
which is better and will work for a simple jar build, but it will fail when you attach sources. Yet another reason why I'll never use Gradle unless I'm tortured for at least a week.

Note that Gradle uses the ${USER_HOME}/.gradle/caches/modules-2/files-2.1 directory as a file cache for artifacts from remote repositories.

Happy local jar sharing!

Tuesday, 22 March 2016

Fun with Spring Boot auto-configuration

I favour configuration that is as straightforward as possible. Unfortunately Spring Boot is pretty much the opposite, as it employs lots of auto-configuration. Sometimes it is way too eager and initializes everything it stumbles upon.

If you are building a fresh new project, you might be spared, but when converting a pre-existing project or building something slightly unusual, you will very likely meet Spring Boot initialization failures.

There are two main groups of initialization sources:

  • Libraries in the classpath - which might be dragged into the classpath as a transitive dependency of a library you really use.
  • Beans in the application context - Spring Boot auto-configuration often supports only a single resource of a given kind, a database DataSource or a JMS ConnectionFactory for example. When you have more than one, the initializer gets confused and fails.
  • Combination of both - otherwise it would be too easy

The complete list of auto-configuration classes is in the documentation. For the record, I'm using Spring Boot 1.3.3 and Spring Framework 4.2.5.

Having simple Spring Boot application like this...

@SpringBootApplication
public class AutoConfigBoom {

    @Bean
    @ConfigurationProperties(prefix = "datasource.ds1")
    DataSource ds1() {
        return DataSourceBuilder.create().build();
    }

    public static void main(String[] args) {
        SpringApplication.run(AutoConfigBoom.class, args);
    }
}
... and application.properties...
datasource.ds1.driverClassName=org.mariadb.jdbc.Driver
datasource.ds1.url=jdbc:mysql://localhost:3306/whatever?autoReconnect=true
datasource.ds1.username=hello
datasource.ds1.password=dolly
...if you happen to have a JPA implementation like Hibernate in the classpath, the JPA engine will be initialized automatically. You might spot messages like these in the logging output...
2016-03-22 18:45:55,560|main      |INFO |o.s.o.j.LocalContainerEntityManagerFactoryBean: Building JPA container EntityManagerFactory for persistence unit 'default'
2016-03-22 18:45:55,577|main      |INFO |o.hibernate.jpa.internal.util.LogHelper: HHH000204: Processing PersistenceUnitInfo [
 name: default
 ...]
2016-03-22 18:45:55,639|main      |INFO |org.hibernate.Version: HHH000412: Hibernate Core {5.1.0.Final}
2016-03-22 18:45:55,640|main      |INFO |org.hibernate.cfg.Environment: HHH000206: hibernate.properties not found
2016-03-22 18:45:55,641|main      |INFO |org.hibernate.cfg.Environment: HHH000021: Bytecode provider name : javassist
2016-03-22 18:45:55,678|main      |INFO |org.hibernate.annotations.common.Version: HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2016-03-22 18:45:55,686|main      |WARN |org.hibernate.orm.deprecation: HHH90000006: Attempted to specify unsupported NamingStrategy via setting [hibernate.ejb.naming_strategy]; NamingStrategy has been removed in favor of the split ImplicitNamingStrategy and PhysicalNamingStrategy; use [hibernate.implicit_naming_strategy] or [hibernate.physical_naming_strategy], respectively, instead.
2016-03-22 18:45:56,049|main      |INFO |org.hibernate.dialect.Dialect: HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
If this is undesired (it only slows down application startup), exclude the particular initializers...
@SpringBootApplication(
  exclude = { HibernateJpaAutoConfiguration.class, JpaRepositoriesAutoConfiguration.class })
public class AutoConfigBoom {
  ...
}
...which is just a shortcut for...
@Configuration
@EnableAutoConfiguration(
  exclude = { HibernateJpaAutoConfiguration.class, JpaRepositoriesAutoConfiguration.class })
@ComponentScan
public class AutoConfigBoom {
  ...
}

To make things spicier, let's declare a second DataSource. Let's assume that XA transactions spanning multiple transactional resources are not required.

@SpringBootApplication
public class AutoConfigBoom {

    @Bean
    @ConfigurationProperties(prefix = "datasource.ds1")
    DataSource ds1() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "datasource.ds2")
    DataSource ds2() {
        return DataSourceBuilder.create().build();
    }

    public static void main(String[] args) {
        SpringApplication.run(AutoConfigBoom.class, args);
    }
}
If you have Hibernate JPA in the classpath, you will get...
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration': Injection of autowired dependencies failed; 
nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private javax.sql.DataSource org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.dataSource; 
nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ds2' defined in com.example.boom.AutoConfigBoom: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSourceInitializer': Invocation of init method failed; 
nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [javax.sql.DataSource] is defined: expected single matching bean but found 2: ds2,ds1
...because the JPA engine is now confused about which DataSource to use. If you really intend to use the JPA EntityManager, then the easiest solution is to mark one DataSource with the @Primary annotation...

    @Primary
    @Bean
    @ConfigurationProperties(prefix = "datasource.ds1")
    DataSource ds1() {
        return DataSourceBuilder.create().build();
    }

...otherwise exclude HibernateJpaAutoConfiguration.class and JpaRepositoriesAutoConfiguration.class from auto-configuration as shown above. The @Primary annotation will also help you in case of...
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ds1' defined in com.example.boom.AutoConfigBoom: Initialization of bean failed; 
nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSourceInitializer': Invocation of init method failed; 
nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [javax.sql.DataSource] is defined: expected single matching bean but found 2: ds2,ds1
Now, DataSourceInitializer cannot be simply excluded, but it can be turned off by adding...
spring.datasource.initialize=false
into application.properties, but you will get yet another initialization failure...
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration$JdbcTemplateConfiguration': Injection of autowired dependencies failed; 
nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private javax.sql.DataSource org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration$JdbcTemplateConfiguration.dataSource; 
nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [javax.sql.DataSource] is defined: expected single matching bean but found 2: ds1,ds2

...which will force you to exclude DataSourceAutoConfiguration and also DataSourceTransactionManagerAutoConfiguration. Probably not worth it; rather use @Primary to avoid this madness. For completeness, the full exclusion is sketched below.
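
The full exclusion would look something like this (just the combination of the snippets above):

@SpringBootApplication(
  exclude = { DataSourceAutoConfiguration.class, DataSourceTransactionManagerAutoConfiguration.class,
    HibernateJpaAutoConfiguration.class, JpaRepositoriesAutoConfiguration.class })
public class AutoConfigBoom {
  ...
}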

One small tip at last. We had a few legacy apps which were good candidates for Spring Bootification. Here goes a typical Spring Boot wrapper I wrote to make the upgrade/transition as seamless as possible. It reuses the legacy external properties file, which is now well covered in the documentation. Part of the upgrade was also a logging migration to slf4j and logback (SLF4JBridgeHandler).

public final class LegacyAppWrapper {

    static {
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
    }

    public static void main(String[] args) {
        // Do not do initialization tricks
        System.setProperty(
          "spring.datasource.initialize", "false");

        // Stop script does: curl -X POST http://localhost:8080/shutdown
        System.setProperty(
          "endpoints.shutdown.enabled", "true");

        // Instruct Spring Boot to use our legacy external property file
        String configFile = "file:" + System.getProperty("legacy.config.dir") + "/" + "legacy-config.properties";
        System.setProperty(
          "spring.config.location", configFile);

        ConfigurableApplicationContext context = SpringApplication.run(LegacyAppSpringBootConfig.class, args); // embedded http server blocks until shutdown
    }
}

Happy auto-configuration exclusions!

Friday, 18 March 2016

Join unrelated entities in JPA

With SQL, you can join pretty much any two tables on almost any columns that have compatible types. This is not possible in JPA, as it relies heavily on relationship mapping.

A JPA relationship between two entities must be declared, usually using @OneToMany, @ManyToOne or another annotation on some JPA entity's field. But sometimes you cannot simply introduce a new relationship into your existing domain model, as it can bring all sorts of troubles, like unnecessary fetches or lazy loads. Then ad hoc joining comes in very handy.

JPA 2.1 introduced ON clause support, which is useful but not ground breaking. It only allows adding a joining condition to the one that implicitly exists because of the entity relationship mapping.

To cut a long story short, the JPA specification still does not allow ad hoc joins on unrelated entities, but fortunately both of the two most used JPA implementations can do it now.

Of course, you still have to map the columns (most likely numeric id columns) using the @Id or @Column annotations, but you do not have to declare a relationship between the entities to be able to join them.

EclipseLink since version 2.4 - User Guide and Ticket

Hibernate starting with version 5.1.0 released this February - Announcement and Ticket

Using this powerful tool, we can achieve the unimaginable.

@Entity
@Table(name = "CAT")
class Cat {

    @Id
    @Column(name = "NAME")
    private String name;

    @Column(name = "KITTENS")
    private Integer kittens;
}

@Entity
@Table(name = "INSURANCE")
class Insurance {

    @Id
    @Column(name = "NUMBER")
    private Integer number;

    @Temporal(TemporalType.DATE)
    private Date startDate;
}

For example, to join the number of kittens your cat has with your insurance number!
String jpaql = "SELECT c FROM Cat c JOIN Insurance i ON c.kittens = i.number";
List<Cat> cats = entityManager.createQuery(jpaql, Cat.class).getResultList();

If you try this with a Hibernate version older than 5.1.0, you will get a QuerySyntaxException:
Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: Path expected for join!
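
If you are stuck on an older Hibernate, a theta-style join (a cross join in the FROM clause plus a WHERE condition) has always been legal JPQL and yields the same rows:

// Cross join plus WHERE condition works even without ON support for unrelated entities
String jpaql = "SELECT c FROM Cat c, Insurance i WHERE c.kittens = i.number";
List<Cat> cats = entityManager.createQuery(jpaql, Cat.class).getResultList();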

Happy ad hoc joins!

Monday, 16 November 2015

Upgrading Querydsl 3 to 4

Querydsl has been my weapon of choice for some time now when it comes to writing typesafe JPA queries. Its main advantage is the great readability of the code compared to the standard JPA Criteria API. Sometimes I also use the Querydsl SQL module, when JPA is not available or the JPA entity mapping is too poor (sadly a common case) to be used in complex queries.

Anyway, today I've been upgrading from Querydsl 3.6.x to the latest 4.0.x and I've met numerous API changes. Unfortunately I did not find any migration instructions, so here come my notes. They might be useful to someone.

Simple query

Before:
QEntity qEntity = QEntity.entity;
List<Entity> list = new JPAQuery(emDomain).from(qEntity).list(qEntity);
After:
QEntity qEntity = QEntity.entity;
List<Entity> list = new JPAQuery<Entity>(emDomain).from(qEntity).fetch();

SearchResults -> QueryResults

Before:
SearchResults<Tuple> results = query.listResults(qEntity.id, qEntity.name);
After:
QueryResults<Tuple> results = query.select(qEntity.id, qEntity.name).fetchResults();

Join fetch

Before:
query.leftJoin(qEntity.approvedCreatives).fetch();
After:
query.leftJoin(qEntity.approvedCreatives).fetchJoin();

Single result

Before:
Tuple tuple = query.singleResult(qEntity.id, qEntity.name);
After:
Tuple tuple = query.select(qEntity.id, qEntity.name).fetchOne();

Subqueries

Before:
JPAQuery query = new JPAQuery(emDomain);
ListSubQuery<Long> subQuery = new JPASubQuery().from(qCampaing).where(qCampaing.id.eq(campaingId)).join(qCampaing.segments, qSegment).list(qSegment.id);
query.from(qSegment).where(qSegment.id.in(subQuery));
After:
JPAQuery<Segment> query = new JPAQuery<>(emDomain);
JPQLQuery<Long> subQuery = JPAExpressions.selectFrom(qCampaing).where(qCampaing.id.eq(campaingId)).join(qCampaing.segments, qSegment).select(qSegment.id);
query.from(qSegment).where(qSegment.id.in(subQuery));
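
A related tip: Querydsl 4 also offers JPAQueryFactory as an entry point, which makes the new select-first style a bit less verbose:

JPAQueryFactory queryFactory = new JPAQueryFactory(emDomain);
List<Entity> entities = queryFactory.selectFrom(qEntity).fetch();
QueryResults<Tuple> results = queryFactory.select(qEntity.id, qEntity.name).from(qEntity).fetchResults();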

Happy typesafe Querydsling!

Thursday, 27 November 2014

Certificates with SHA-1 and SunCertPathBuilderException

As SHA-1 heads towards deprecation as a hashing algorithm for certificate signatures, unpleasant effects start to appear.

The partners we need to communicate with over HTTPS have a brand new certificate signed by the GoDaddy Certificate Authority. Accessing their HTTPS-secured site via a browser does not show anything alarming.

But accessing a REST endpoint hosted on the same site using java HttpURLConnection blows up with javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

What is going on?

Every internet browser comes with quite a big set of preinstalled Certificate Authority (CA) certificates that are trusted because your browser's vendor trusts them. And because they are CAs and they are trusted, every certificate that is signed by them is trusted too.

Same story with the JVM. There is a truststore file named cacerts inside every JVM. In Oracle jdk1.7.0_67 there are 87 Certificate Authorities in it, as they are trusted by Oracle. GoDaddy is there too, so why that SunCertPathBuilderException? Let's examine it more closely.

Every JVM also ships with a command line tool named keytool. Using it, you can list and also modify any keystore in the JKS format, such as cacerts. Executing the following (the default password is changeit)...

keytool -list -v -keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit | grep -A 14 godaddy
...will print the following...
Alias name: godaddyclass2ca
Creation date: 20-Jan-2005
Entry type: trustedCertEntry

Owner: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Issuer: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Serial number: 0
Valid from: Tue Jun 29 18:06:20 BST 2004 until: Thu Jun 29 18:06:20 BST 2034
Certificate fingerprints:
  MD5:  91:DE:06:25:AB:DA:FD:32:17:0C:BB:25:17:2A:84:67
  SHA1: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
  SHA256: C3:84:6B:F2:4B:9E:93:CA:64:27:4C:0E:C6:7C:1E:CC:5E:02:4F:FC:AC:D2:D7:40:19:35:0E:81:FE:54:6A:E4
  Signature algorithm name: SHA1withRSA
  Version: 3
Now compare this to the certificate presented by the HTTPS website (the GoDaddy G2 certificate)...

There is an obvious mismatch. Apart from the different certificate name and validity date range, also notice that the Signature Algorithm is "SHA-256 with RSA". GoDaddy's certificate in the JVM is different from the one in use on the website, therefore the SunCertPathBuilderException.

To fix this, we need to add the right (G2) GoDaddy certificate into the JVM cacerts keystore. Visiting GoDaddy's certificate repository, the obvious candidate "GoDaddy Class 2 Certification Authority Root Certificate - G2" can be found there.

wget https://certs.godaddy.com/repository/gdroot-g2.crt
keytool -printcert -file gdroot-g2.crt
...will give us something like what we saw in the website certificate...
Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Serial number: 0
Valid from: Mon Aug 31 20:00:00 EDT 2009 until: Thu Dec 31 18:59:59 EST 2037
Certificate fingerprints:
  MD5:  80:3A:BC:22:C1:E6:FB:8D:9B:3B:27:4A:32:1B:9A:01
  SHA1: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
  SHA256: 45:14:0B:32:47:EB:9C:C8:C5:B4:F0:D7:B5:30:91:F7:32:92:08:9E:6E:5A:63:E2:74:9D:D3:AC:A9:19:8E:DA
  Signature algorithm name: SHA256withRSA
  Version: 3

Now just import gdroot-g2.crt into the JVM cacerts truststore:

sudo keytool -import -alias godaddyg2ca -file gdroot-g2.crt -keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit

Owner: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Issuer: CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
Serial number: 0
Valid from: Tue Sep 01 01:00:00 BST 2009 until: Thu Dec 31 23:59:59 GMT 2037
Certificate fingerprints:
  MD5:  80:3A:BC:22:C1:E6:FB:8D:9B:3B:27:4A:32:1B:9A:01
  SHA1: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
  SHA256: 45:14:0B:32:47:EB:9C:C8:C5:B4:F0:D7:B5:30:91:F7:32:92:08:9E:6E:5A:63:E2:74:9D:D3:AC:A9:19:8E:DA
  Signature algorithm name: SHA256withRSA
  Version: 3

Extensions: 

#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#2: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  Key_CertSign
  Crl_Sign
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 3A 9A 85 07 10 67 28 B6   EF F6 BD 05 41 6E 20 C1  :....g(.....An .
0010: 94 DA 0F DE                                        ....
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Problem solved; REST calls should succeed from now on using this JVM.

For completeness' sake, if you want to get rid of it, execute:

keytool -delete -alias godaddyg2ca -keystore ${JAVA_HOME}/jre/lib/security/cacerts

What if you are not allowed to modify the JVM cacerts truststore?

Then make a copy of it, import gdroot-g2.crt into the copy and use this custom truststore instead of the default JVM truststore via the -Djavax.net.ssl.trustStore=/path/to/custom_cacerts -Djavax.net.ssl.trustStorePassword=changeit java parameters.
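
The same can be done programmatically, as long as it happens before the first HTTPS connection is made:

// Equivalent of the -D command line switches above
System.setProperty("javax.net.ssl.trustStore", "/path/to/custom_cacerts");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");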

What if you need multiple keystores or something similarly complex?

Such scenarios cannot be solved simply by JVM switches and parameters anymore, and you have to roll your own X509TrustManager implementation. Then you need to plug it into your HTTP client connection setup - (HttpsURLConnection SSLSocketFactory) (Apache HttpClient 3 SecureProtocolSocketFactory) (Apache HttpClient 4 SSLConnectionSocketFactory) (Jersey SslConfigurator). A sketch of the HttpsURLConnection case follows.
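
A minimal sketch, assuming the custom truststore from above sits at /path/to/custom_cacerts with the default password:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class CustomTrustStore {

    public static void install() throws Exception {
        // Load the custom truststore (a copy of cacerts with extra certificates imported)
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream("/path/to/custom_cacerts")) {
            trustStore.load(in, "changeit".toCharArray());
        }
        // Build trust managers backed by that truststore
        // (for truly custom checks, implement X509TrustManager yourself and use it instead)
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        // Wire the trust managers into an SSLContext and make HttpsURLConnection use it by default
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.getSocketFactory());
    }
}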

Monday, 3 November 2014

Airbrake for logback

If you've been observing this ticket for a while, it seems to be pretty much ignored. Well, not anymore, or at least not by me. The undramatic sources of the Airbrake Logback Appender are in the GitHub airbrake-logback repo.

Grab it from the Maven central repo:

<dependency>
    <groupId>net.anthavio</groupId>
    <artifactId>airbrake-logback</artifactId>
    <version>1.0.0</version>
</dependency>

Use it... well... as usual:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="true" scanPeriod="30 seconds">

    <appender name="AIRBRAKE" class="net.anthavio.airbrake.AirbrakeLogbackAppender">
        <apiKey>YOUR_AIRBRAKE_API_KEY</apiKey>
        <env>test</env>
        <enabled>true</enabled>

        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <root>
        <level value="info" />
        <appender-ref ref="AIRBRAKE" />
    </root>
</configuration>
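
If you prefer wiring logback programmatically, the same setup should look roughly like this; note that the appender's setter names here are inferred from the XML elements above, so treat them as an assumption:

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.filter.ThresholdFilter;
import net.anthavio.airbrake.AirbrakeLogbackAppender;
import org.slf4j.LoggerFactory;

public class AirbrakeSetup {

    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        AirbrakeLogbackAppender appender = new AirbrakeLogbackAppender();
        appender.setContext(context);
        appender.setApiKey("YOUR_AIRBRAKE_API_KEY"); // setters assumed to mirror the XML elements
        appender.setEnv("test");
        appender.setEnabled(true);

        // Only ERROR and above should reach Airbrake
        ThresholdFilter filter = new ThresholdFilter();
        filter.setLevel("ERROR");
        filter.start();
        appender.addFilter(filter);
        appender.start();

        context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(appender);
    }
}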

Happy Logback-based Airbraking!