Friday, 11 April 2014

Spring MockMvc tests

The MVC Test Framework arrived with Spring 3.2. It allows you to write integration tests (well... almost) for your Spring MVC @Controller(s)

One of the server-side cornerstones is the MockMvc class. It lets you execute requests against your Controllers very easily, but it needs to be initialized before it is used. There are two ways to initialize MockMvc, and the choice depends on how deeply your tests need to integrate with other Spring MVC components. In my case, it was a custom view resolver and a few others I removed for the sake of simplicity.

The first one is meant for simple single-Controller testing

    @Test
    public void foo() throws Exception {
        TestedController controller = new TestedController();
        MyCustomViewResolver resolver = new MyCustomViewResolver();
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(controller).setViewResolvers(resolver).build();

        MvcResult result = mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
        MockHttpServletResponse response = result.getResponse();
        //parse and assert response.getContentAsString() as the view was resolved by MyCustomViewResolver
    }
You can plug in an extensive set of the usual spring-mvc suspects, like ConversionService, ViewResolvers, MessageConverters, ... see the StandaloneMockMvcBuilder javadoc
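Beyond andReturn(), expectations can also be chained with the fluent andExpect() API (a fragment only; static imports from MockMvcRequestBuilders and MockMvcResultMatchers are assumed, as are the expected status and content type):

```java
// static imports from MockMvcRequestBuilders / MockMvcResultMatchers assumed
mockMvc.perform(get("/foo"))
        .andExpect(status().isOk())                      // assumed: /foo responds with 200
        .andExpect(content().contentType("text/html"));  // assumed content type
```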

All this manual assembling makes me feel a little uncomfortable. Normally all those beans are wired together in the Spring Dispatcher context.

Another way to initialize MockMvc is using @WebAppConfiguration and MockMvcBuilders.webAppContextSetup(webAppContext)

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = TestMvcSpringConfig.class)
public class FooTests {

    @Autowired
    private WebApplicationContext webAppContext;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
    }

    @Test
    public void foo() {
        MvcResult result = this.mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
        MockHttpServletResponse response = result.getResponse();
        //parse and assert response.getContentAsString() as the view was resolved by MyCustomViewResolver
    }

    @EnableWebMvc
    @Configuration
    public static class TestMvcSpringConfig extends WebMvcConfigurerAdapter {

        @Override
        public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
            configurer.enable();
        }

        @Bean
        public TestedController getController() {
            return new TestedController();
        }

        @Bean
        public ViewResolver viewResolver() {
            return new MyCustomViewResolver();
        }
    }
}

For some unknown reason (which disappeared as quickly as it appeared), Spring was failing to create the @EnableWebMvc annotated @Configuration, complaining about a missing servlet context. I had to get my hands dirty and roll my own Spring web context. So for completeness, here is how it was done using MockServletContext. It may be useful sometime.


    @Test
    public void test() throws Exception {
        MockServletContext servletContext = new MockServletContext();

        AnnotationConfigWebApplicationContext springContext = new AnnotationConfigWebApplicationContext();
        springContext.setServletContext(servletContext);
        springContext.register(TestMvcSpringConfig.class);
        springContext.refresh();

        MockMvc mockMvc = MockMvcBuilders.webAppContextSetup(springContext).build();

        MvcResult result = mockMvc.perform(MockMvcRequestBuilders.get("/foo")).andReturn();
    }

Happy Spring MVC testing!

Thursday, 10 April 2014

RPM upgrade and embedded Jetty

The Java web application I've been working on recently (let's name it lobster) embeds and packages Jetty inside itself. This makes the application artifact more self-contained and independent, because it does not require a preinstalled and preconfigured server.

To simplify application deployment even more, it is packaged as an RPM file, which is uploaded into a Nexus serving as a Yum repository.

Deployment is then a simple matter of executing sudo yum install lobster, and thanks to RPM scriptlets, the server is automatically restarted as a part of the installation.

Everything went nice and smooth, until the second release. When yum update happened, we found out what an amazingly weird thing RPM upgrade is.

The sudo yum update lobster execution completed, but the application failed to start and the logfile contained strange exceptions
Caused by: java.io.FileNotFoundException: /opt/lobster/lib/lobster-core-0.4.0.jar (No such file or directory)
 at java.util.zip.ZipFile.open(Native Method) ~[na:1.7.0_40]
 at java.util.zip.ZipFile.<init>(ZipFile.java:215) ~[na:1.7.0_40]
 at java.util.zip.ZipFile.<init>(ZipFile.java:145) ~[na:1.7.0_40]
 at java.util.jar.JarFile.<init>(JarFile.java:153) ~[na:1.7.0_40]
 at java.util.jar.JarFile.<init>(JarFile.java:90) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122) ~[na:1.7.0_40]
 at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:89) ~[na:1.7.0_40]

What was even weirder: it was a lobster version 0.5.0 installation, and 0.4.0, stated in the stacktrace, was actually the previous version!

To make a long story short, after some googling I found the RPM upgrade sequence.

  1. execute new version %pre [ $1 >= 2 ]
  2. unpack new version files (both old and new are now mixed together)
  3. execute new version %post [ $1 >= 2 ]
  4. execute old version %preun [ $1 == 1 ]
  5. remove old files (only new and unchanged stays)
  6. execute old version %postun [ $1 == 1 ]

The important thing is that between steps 2 and 5, a mixture of old and new version jar files is present in the installation directory! An attempt to start the server java process in the %post or %preun scriptlet can only end in the same disaster we experienced: the old version jars will be deleted right after scriptlet execution.

Here comes a working install/upgrade/uninstall solution

%pre scriptlet

#!/bin/sh
# rpm %pre scriptlet
#
# parameter $1 means
# $1 == 1 ~ initial installation
# $1 >= 2 ~ version upgrade
# never executed for uninstall

echo "rpm: pre-install $1"

# failsafe commands - can't break anything

# make sure that user exist
id -u lobster &>/dev/null || useradd lobster

# make sure that the application is not running
if [ -f /etc/init.d/lobster ]; then
 /sbin/service lobster stop
fi

%post scriptlet

The important thing is NOT to start the application on rpm upgrade
#!/bin/sh
# rpm %post scriptlet
#
# parameter $1 means
# $1 == 1 ~ initial installation
# $1 >= 2 ~ version upgrade
# never executed for uninstall

echo "rpm: post-install $1"

# initial install
if [ "$1" -eq "1" ]; then
 /sbin/chkconfig --add lobster
 /sbin/service lobster start
fi

%preun scriptlet

#!/bin/sh
# rpm %preun scriptlet
#
# parameter $1 means
# $1 == 0 ~ uninstall last
# $1 == 1 ~ version upgrade
# never executed for first install

echo "rpm: pre-uninstall $1"

# uninstall last
if [ "$1" -eq "0" ]; then
 /sbin/service lobster stop
 /sbin/chkconfig --del lobster
fi

%postun scriptlet

Here is right place to start application on rpm upgrade
#!/bin/sh
# rpm %postun scriptlet
#
# parameter $1 means
# $1 == 0 ~ uninstall last
# $1 == 1 ~ version upgrade
# never executed for first install

# console output is suppressed for %postun !
echo "rpm: post-uninstall $1"

# upgrade
if [ "$1" -ge "1" ]; then
 /sbin/service lobster start
fi

Useful RPM scriptlet documentation is in the Fedora Wiki, along with how to integrate with SysV Init Scripts

Happy RPM deployments!

Monday, 24 March 2014

Spring Security with multiple AuthenticationManagers

Spring Security (formerly Acegi Security) configuration was for a long time quite an exhausting task. Gluing together all the filters, entry points, success handlers and authentication providers was no small price to pay for the overwhelming flexibility of this awesome framework.

Starting with version 2.0, the simplified namespace configuration (<sec:http/>) was introduced, allowing most common setups to be configured with just a few lines. Not much changed with version 3.0.

However, this new configuration style also introduced one important limitation - only a single security filter chain with a single AuthenticationManager can be configured with it. When you happen to have a web application with two faces - Web Pages and REST endpoints - with quite different authentication requirements, you are in trouble. The only option is falling back to the traditional bean-by-bean configuration.

The following gist shows how it can be done

What a massive piece of xml!!!

The limitation of a single security filter chain was removed in version 3.1, which allows multiple <http authentication-manager-ref="..."/> elements, each possibly with a different AuthenticationManager.
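In 3.1 namespace terms, such a split might look like this (a sketch only; the URL patterns, bean names and auth mechanisms are assumptions, not taken from the post):

```xml
<!-- stateless REST chain, matched first by its pattern -->
<sec:http pattern="/api/**" create-session="stateless"
          authentication-manager-ref="restAuthenticationManager">
    <sec:http-basic/>
</sec:http>

<!-- default web chain for everything else -->
<sec:http authentication-manager-ref="webAuthenticationManager">
    <sec:form-login/>
    <sec:logout/>
</sec:http>
```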

The latest and greatest version 3.2 brought the long awaited Java Configuration, with the sweet @EnableWebSecurity and WebSecurityConfigurerAdapter combo. To avoid repeating the same mistake again, this funny trick can be used to define multiple filter chains and AuthenticationManagers.
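The usual shape of multiple chains in 3.2 Java Configuration is several ordered WebSecurityConfigurerAdapter classes, each building its own filter chain (a sketch; URL patterns and class names are assumptions):

```java
@EnableWebSecurity
public class SecurityConfig {

    @Configuration
    @Order(1) // evaluated first, applies only to /api/**
    public static class RestSecurityConfig extends WebSecurityConfigurerAdapter {
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.antMatcher("/api/**").httpBasic();
        }
    }

    @Configuration // default order, catches everything else
    public static class WebSecurityConfig extends WebSecurityConfigurerAdapter {
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.formLogin();
        }
    }
}
```

Each adapter can also override configure(AuthenticationManagerBuilder), which is what gives every chain its own AuthenticationManager.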

But as an exercise, I also tried to disassemble the configuration into the old school bean-by-bean way. Let's call it poor man's Spring Security java config.

The following gist shows how it can be done

While this setup is not as complex as the xml based example before, it is still a big chunk of code.

Happy Spring securing!

Monday, 3 March 2014

Logback with rsyslog

In the company I work for, developers don't have any access to Live servers. While this is good for separation of responsibilities, it makes problem and error tracking painful. Most often we need access to log files without asking somebody from the operations team every single time.

To overcome this, we went the predictable way of setting up a company-wide log aggregation server. While there is a plethora of solutions, one is quite easy to set up if you are on Unix/Linux - rsyslog. It is widely used to provide the standard syslog feature, so there is a good chance you have it installed already.

I will not talk much about the rsyslog server. It usually listens on UDP port 514 and has the configuration file /etc/rsyslog.conf. An excellent article about installation and basic configuration is here.

Any (client host) rsyslog can be configured to forward its syslog messages to the server rsyslog. This might be the right way when you develop a native or RoR application and log into the local syslog, but the Java world has its well established logging libraries, namely Logback and Log4j. Both of them provide appenders capable of sending log events to an rsyslog server, while logging into local files as usual.

Anyway, if you are already logging directly into the local syslog and need to forward messages to the log server, just add this to the end of the /etc/rsyslog.conf file.

*.* @logs.mycompany.com:514
You can use the DNS name or IP address of your rsyslog server. 514 is the default rsyslog port.

To log from java directly to the rsyslog server, skipping the local syslog completely, add a SyslogAppender into logback.xml.
Logback does not provide explicit configuration for the TAG aka APP-NAME aka $programname, but it can be emulated using a special syntax for the first word of the log message:
it must start with ' ' (whitespace) and end with ':' (colon). See ' mycoolapp:' in the suffixPattern element value. A Logback configuration feature ticket exists, but it seems to be abandoned. I don't use Log4j, so here is just a link to an article about Log4j and syslog. But I don't think it covers TAG usage.
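A minimal sketch of such an appender (the host, facility and pattern details are assumptions; the point is the leading space and trailing colon around mycoolapp):

```xml
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>logs.mycompany.com</syslogHost>
    <port>514</port>
    <facility>LOCAL0</facility>
    <!-- ' mycoolapp:' (leading space, trailing colon) emulates the TAG / $programname field -->
    <suffixPattern> mycoolapp: [%thread] %logger %msg</suffixPattern>
</appender>
```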

Back on the server, log messages from multiple hosts and applications are, by default, written into the single /var/log/messages file, making it quite messy. Fortunately rsyslog provides an easy way of routing messages coming from different hosts or applications into different files.

You can split messages by the hostname or ip address they come from, as described here, or you can split them by log message content, as described here. You can throw them away too, as described here.

What we chose is routing messages belonging to the same application into a single log file. Messages come from different hosts because the application runs in a cluster. Every log line contains the hostname, so the origin can still be recognized.
if $programname == 'mycoolapp' then /var/log/mycoolapp/mycoolapp.log
To make this work, the client must send the TAG ($programname) field correctly, as described above for logback.xml

Happy rsyslogging!

Wednesday, 26 February 2014

Quartz on H2 with MODE=MYSQL

We are using an embedded H2 database for local development, but the application, when deployed, uses MySQL. H2 has a cool compatibility mode feature allowing it to mimic another RDBMS so you can use its non-standard SQL statements. The level of compatibility is of course not 100%.

While Hibernate runs nicely on H2 with MySQLInnoDBDialect configured, trouble starts when you try to run Quartz Scheduler (2.2.1) on H2, even in MYSQL mode. You'll face exceptions like...

org.quartz.JobPersistenceException: Couldn't store job: Value too long for column "IS_DURABLE VARCHAR(1) NOT NULL": "'TRUE' (4)"; SQL statement:
INSERT INTO QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_NONCONCURRENT, IS_UPDATE_DATA, REQUESTS_RECOVERY, JOB_DATA)  VALUES('booster-scheduler', ?, ?, ?, ?, ?, ?, ?, ?, ?) [22001-175]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1118) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$3.executeVoid(JobStoreSupport.java:1090) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$VoidTransactionCallback.execute(JobStoreSupport.java:3703) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport$VoidTransactionCallback.execute(JobStoreSupport.java:3701) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreCMT.executeInLock(JobStoreCMT.java:245) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1086) ~[quartz-2.2.1.jar:na]
 at org.quartz.core.QuartzScheduler.addJob(QuartzScheduler.java:969) ~[quartz-2.2.1.jar:na]
 at org.quartz.core.QuartzScheduler.addJob(QuartzScheduler.java:958) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.StdScheduler.addJob(StdScheduler.java:268) ~[quartz-2.2.1.jar:na]
 at org.springframework.scheduling.quartz.SchedulerAccessor.addJobToScheduler(SchedulerAccessor.java:342) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerAccessor.addTriggerToScheduler(SchedulerAccessor.java:365) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerAccessor.registerJobsAndTriggers(SchedulerAccessor.java:303) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.scheduling.quartz.SchedulerFactoryBean.afterPropertiesSet(SchedulerFactoryBean.java:514) ~[spring-context-support-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1612) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549) ~[spring-beans-4.0.1.RELEASE.jar:4.0.1.RELEASE]
 ... 56 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLException: Value too long for column "IS_DURABLE VARCHAR(1) NOT NULL": "'TRUE' (4)"; SQL statement:
INSERT INTO QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_NONCONCURRENT, IS_UPDATE_DATA, REQUESTS_RECOVERY, JOB_DATA)  VALUES('booster-scheduler', ?, ?, ?, ?, ?, ?, ?, ?, ?) [22001-175]
 at org.h2.engine.SessionRemote.done(SessionRemote.java:589) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.command.CommandRemote.executeUpdate(CommandRemote.java:186) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:154) ~[h2-1.3.175.jar:1.3.175]
 at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:140) ~[h2-1.3.175.jar:1.3.175]
 at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:205) ~[bonecp-0.8.0.RELEASE.jar:na]
 at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.insertJobDetail(StdJDBCDelegate.java:624) ~[quartz-2.2.1.jar:na]
 at org.quartz.impl.jdbcjobstore.JobStoreSupport.storeJob(JobStoreSupport.java:1112) ~[quartz-2.2.1.jar:na]
 ... 70 common frames omitted

The reason behind this is that the current H2 version (1.3.175) is unable to perform automatic conversion from boolean into VARCHAR(1) like MySQL can. There is a recent discussion about implementing this, but until it is released, you can make it work simply by changing VARCHAR(1) into BOOLEAN inside the Quartz schema tables_mysql_innodb.sql.

The following gist shows the result
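For illustration, the changed QRTZ_JOB_DETAILS table might end up looking like this (an excerpt; the surrounding columns are assumed from the standard Quartz tables_mysql_innodb.sql):

```sql
CREATE TABLE QRTZ_JOB_DETAILS (
    SCHED_NAME VARCHAR(120) NOT NULL,
    JOB_NAME VARCHAR(200) NOT NULL,
    JOB_GROUP VARCHAR(200) NOT NULL,
    DESCRIPTION VARCHAR(250) NULL,
    JOB_CLASS_NAME VARCHAR(250) NOT NULL,
    IS_DURABLE BOOLEAN NOT NULL,        -- was VARCHAR(1)
    IS_NONCONCURRENT BOOLEAN NOT NULL,  -- was VARCHAR(1)
    IS_UPDATE_DATA BOOLEAN NOT NULL,    -- was VARCHAR(1)
    REQUESTS_RECOVERY BOOLEAN NOT NULL, -- was VARCHAR(1)
    JOB_DATA BLOB NULL,
    PRIMARY KEY (SCHED_NAME, JOB_NAME, JOB_GROUP)
);
```

The same VARCHAR(1) to BOOLEAN substitution applies to the other QRTZ_* tables that carry boolean-like flags.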

We can only guess why the Quartz guys stick with VARCHAR(1) when BOOLEAN makes much more sense.

Happy scheduling!

Friday, 17 January 2014

Openshift System properties and Environment variables

For cloud deployable applications, it is advisable to make them as self-contained as possible. If an application allows some configuration, the settings should be packaged inside the war file and used as default values, while a value override mechanism is provided.

Linux Environment variables

Unfortunately, the following (usual) way of setting an environment variable will not work on Openshift.
ctl_app stop jbossews
export MY_ENV_VAR="my_env_var_value"
ctl_app start jbossews
To be precise, the JAVA_OPTS_EXT variable set like this will not be visible to the java process running your application. To set it properly, you have to use the Openshift rhc tool, or use this shortcut
echo "my_env_var_value" > ~/.env/user_vars/MY_ENV_VAR
And now you can get the variable value in your webapp
  String my_env_var = System.getenv("MY_ENV_VAR"); //"my_env_var_value"
  if (my_env_var == null) {
    my_env_var = "Default value..."; //I told you not to depend on external configuration and provide default values...
  }

Java System Properties

System Properties are most commonly used to pass parameters into a Java application. They are specified as "-D"-prefixed java command-line parameters:

java -Dmyapp.param=whatever ...
then they can be obtained inside the application
String param = System.getProperty("myapp.param"); //"whatever"
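Combining both lookup mechanisms with the advice above about packaged defaults, a small helper might look like this (the property and variable names are purely illustrative):

```java
public class ConfigLookup {

    // Resolve a setting: a -D system property wins, then an environment
    // variable, then the built-in default packaged with the application.
    public static String resolve(String propertyName, String envName, String defaultValue) {
        String value = System.getProperty(propertyName);
        if (value == null) {
            value = System.getenv(envName);
        }
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        // without -Dmyapp.param or MYAPP_PARAM set, this falls back to the default
        System.out.println(resolve("myapp.param", "MYAPP_PARAM", "default-value"));
    }
}
```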

But Openshift managed servers are started using the ctl_app start ... (or gears start ...) command, and since we are not directly in control of executing java, we can't add "-D" parameters.

The trick is to use the JAVA_OPTS_EXT environment variable. Since we know how to set an Openshift environment variable, it is just a matter of:

echo "-Dmyapp.param=myapp.value" > .env/user_vars/JAVA_OPTS_EXT
...then restart your server...
ctl_app stop jbossews
ctl_app start jbossews
...and enjoy the fruits of your effort (look for -Dmyapp.param=myapp.value)...
[wotan-anthavio.rhcloud.com 52d184e45973ca0bc0000088]\> jps -mlv
29444 sun.tools.jps.Jps -mlv -Dapplication.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45 -Xms8m
24633 org.apache.catalina.startup.Bootstrap start -Xmx256m -XX:MaxPermSize=102m -XX:+AggressiveOpts -DOPENSHIFT_APP_UUID=52d184e45973ca0bc0000088 -Djava.util.logging.config.file=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews//conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Djava.net.preferIPv4Stack=true -Dmyapp.param=myapp.value -Dcatalina.base=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews/ -Dcatalina.home=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews/ -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/lib/openshift/52d184e45973ca0bc0000088/jbossews//tmp

Wednesday, 15 January 2014

Deploy to Openshift from Github repository

On Cloudbees RUN@Cloud or the Jelastic cloud PaaS, it is simple to build the application war on your devbox and then use the provided web interface to upload and deploy it.

Openshift does not offer such a deployment option right away, and similarly to Heroku, it encourages you to do builds and deployments via a git push commit hook. First I'll do the deployment the encouraged and documented Openshift way, but because it actually is possible to deploy an externally built war to Openshift, I'll describe how to do that too.

I have an existing, quite simple, Maven built, multimodule application Wotan hosted in a Github repository. One of its submodules - wotan-browser - is a web application packaged as a war. The plan is to deploy this webapp to Openshift.

Create Account and some Gears

An Openshift account comes with 3 free Gears (simultaneously running JVMs). I signed in and created a Gear named "wotan" with the "Tomcat 7 (JBoss EWS 2.0)" cartridge. The same can be done using the rhc app create wotan jbossews-2.0 command. I also added the "Jenkins Client" cartridge, which creates a Jenkins instance that occupies the second Gear. To execute a Jenkins build, the third and last free Gear is required. Since that escalated quickly, I'll later show how to build the web application without Jenkins and save two Gears.

In order to simplify authentication, I configured the same ssh public key I'm using for Github on the Settings page. I strongly encourage you to do this as well. If it happens that you do not have any ssh key, create one using ssh-keygen, or puttygen if you are on Windows.

What Openshift provides

Openshift gives you a real linux system account, which is nice. It's a big difference compared to other cloud platforms, where you have only a limited interface (web or SDK) to interact with your server and applications.
You can login into...
ssh 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com
...look around for a while...
[wotan-anthavio.rhcloud.com 52d184e45973ca0bc0000088]\> ls -gG
total 20
drwxr-xr-x.  4 4096 Jan 14 15:19 app-deployments
drwxr-xr-x.  4 4096 Jan 11 12:52 app-root
drwxr-xr-x.  3 4096 Jan 11 12:53 git
drwxr-xr-x. 13 4096 Jan 11 12:53 jbossews
drwxr-xr-x.  8 4096 Jan 11 12:57 jenkins-client
...but the most interesting directories are...
  • jbossews - the server from the Tomcat 7 (JBoss EWS 2.0) cartridge. This is the ultimate place where webapps and logs will be. If you picked another cartridge type for your application, your directory will of course be different
  • git - Git repository with a default simple web application
The Git repository comes prepopulated with a default simple web application. I'm going to replace it with my Github repository content, but it is worth at least checking it out to see how the vanilla pom.xml looks.
git clone ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git/
Quite important is the .openshift directory, because it contains various Openshift metadata and configuration. See the documentation page

The default simple application is also deployed on Gear creation, along with a Jenkins instance containing a preconfigured build Job. You can start a new build and watch on the Applications page how one Gear becomes occupied by the build process and disappears after a while. It is also useful to examine the build console output to see what precisely gets executed and how - java version, maven parameters, etc...

Deploy using Openshift Jenkins

On my devbox I have my Github repo checked out (on a Mac OSX and a Windows 7 PC with Cygwin)

git clone git@github.com:anthavio/wotan.git
Now merge the Github repo into the Openshift repo. The steps are taken from stackoverflow, so I'll just list the commands

git remote add openshift -f ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git/
git merge openshift/master -s recursive -X ours
git push openshift HEAD

Every git push openshift HEAD starts a new Jenkins build. It fails at first because pom.xml is not ready for Openshift and some tweaks have to be made. Let's examine these tweaks.

Maven pom.xml changes for Openshift

The whole pom.xml can be found here
  • To use dependencies not present in the Maven Central repository, an additional repository (sonatype-oss-public) has been added
  • You might have seen a slightly changed maven-war-plugin configuration in the vanilla Openshift pom.xml. The contract for deployment is that all deployable artifacts must be present in the webapps directory and have a nice file name, because this name will become part of the url
  • Turn off maven-enforcer-plugin - this might not be necessary for most developers. I enforce conservative Java 6, but Openshift runs on Java 7

Skip OpenShift git repository

Actually, you need not use the Openshift git repository at all. Just go to the Jenkins Job configuration and change the Git repository URL from ssh://52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com/~/git/wotan.git to the Github repository https://github.com/anthavio/wotan.git directly, and forget the Openshift git repo forever...

Deploy pre-built war

Now I'll use another web application of mine - Disquo. I'll build the war on my devbox and deploy it manually. No Jenkins involved.
# build war locally
git clone git@github.com:anthavio/disquo.git /tmp/disquo.git
cd /tmp/disquo.git
mvn clean package -Dmaven.test.skip=true

# upload assembled war
scp disquo-web/target/disquo-web-1.0.0-SNAPSHOT.war 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com:~/app-root/repo/webapps/disquo.war
ssh 52d184e45973ca0bc0000088@wotan-anthavio.rhcloud.com
# stop tomcat & deploy webapp & start tomcat
ctl_app stop jbossews
cp app-root/repo/webapps/disquo.war jbossews/webapps
ctl_app start jbossews

# Check Tomcat log file
tail -f jbossews/logs/catalina.out
Deploying this way has nice effects
  • The Jenkins gear and building gear are not required, can be removed, and all 3 gears are available for running applications
  • Multiple applications can be deployed into single Tomcat gear
Happy Openshift deployments folks!