Making the Java EE ecosystem work: How we configured our Wildfly AS environment for Development and Production.

When I embarked on the journey to learn Java EE and create enterprise applications that were strictly Java EE compliant (i.e., they used the Java EE APIs and resources first), there were very few Application Servers that were even "kinda" Java EE compliant. Back then we had IBM WebSphere Application Server, BEA WebLogic (now owned by Oracle), and Oracle Application Server. In the Open Source world, the well-known one was JBoss Application Server. I used JBoss from version 3.x up to JBoss AS 6.x, until I discovered Wildfly AS.

When Wildfly was released, it promised to start up in under 7 seconds, and RedHat delivered a fully Java EE compliant Application Server. That was good enough for me. Over the years of working with it, I architected the environment that we currently use in Development and Production, running Wildfly AS in cluster mode.

This is our setup of a Wildfly cluster (in domain mode):

In our development environment, we have the following server setup (the names are for example purposes, and don't reflect the actual names of our servers):
  • WFDEPLOYMENTAS (Master server)
  • WFDEVAS (Development AS)
  • WFQAAS (QA/UAT AS)
  • WFPREPRODAS1 (Pre-Production AS 1)
  • WFPREPRODAS2 (Pre-Production AS 2)
In production, we have the same setup as the development environment, but with a twist:
  • WFPRODMASTER (Master)
  • WFPROD1
  • WFPROD2
  • WFPRODN...
  • WFDR1
  • WFDR2
  • WFDRN
I prepended the names with "WF" (for Wildfly) because we have other Java EE servers set up as well, which I will discuss in the next blog post.

You will see that in the Production environment there are PROD servers and DR servers. The DR servers are Disaster Recovery servers; more on that later.

There is only one master server; all the other servers are slaves.
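
For context, in Wildfly domain mode this master/slave relationship is declared in each host's host.xml. Here's a minimal sketch, assuming the example host names above (the exact port and protocol of the remote element vary between Wildfly versions):

    <!-- host.xml on the master (the domain controller), e.g. WFDEPLOYMENTAS -->
    <domain-controller>
        <local/>
    </domain-controller>

    <!-- host.xml on a slave, e.g. WFDEVAS -->
    <domain-controller>
        <remote host="${jboss.domain.master.address:WFDEPLOYMENTAS}"
                port="${jboss.domain.master.port:9999}"
                security-realm="ManagementRealm"/>
    </domain-controller>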

Our applications are deployed as follows: we create a server group for the application, per environment, then assign a server to that server group. Each application/module has its own server group and server. The reasons are straightforward for us: we wanted a server log file per application, and should one application fail, the other servers aren't affected. Adam Bien suggested that approach too!

Example: suppose you have an application WAR called doomsday.war and you want to deploy it to all your environments, up to pre-prod, for business to test. The setup would be as follows (note that this can only be done in the Admin console on the Master server; the slave servers don't have an admin console):
  • doomsday-dev-server-group
    • doomsday-dev-server
      • WFDEVAS (Let's say port offset is 0 since it's the very first application dev server).
  • doomsday-qa-server-group
    • doomsday-qa-server
      • WFQAAS (Let's say port offset is 0 since it's the very first application QA server).
  • doomsday-preprod-server-group
    • doomsday-preprod-server
      • WFPREPRODAS1 (Let's say port offset is 0 since it's the very first application Pre Prod server).
Should we add another application to the development environment, we would repeat the same setup as above, but change the port offset. What does a port offset entail? It lets us run each application server on a different port. Let's say the context path for doomsday.war is /doom, with a port offset of 0. Our URL in dev, once deployed, will be http://WFDEVAS:8080/doom/. If the port offset were 1, the port becomes 8080 + 1 = 8081. Wildfly creates an application server instance on the physical host whenever one sets up a server. In this case, on the physical WFDEVAS server there is a doomsday-dev-server that Wildfly runs, assigning a JVM to it, on port 8080. No two application servers can run on the same port.
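
For those who prefer scripting over the Admin console, the same setup could be expressed with the jboss-cli against the master. This is a sketch, not our exact configuration: the profile and socket-binding-group names are assumptions, the host name must match whatever is declared in WFDEVAS's host.xml, and the second application "armageddon" is hypothetical.

    # Server group and server for doomsday in dev;
    # the first server on the host gets offset 0 (port 8080)
    /server-group=doomsday-dev-server-group:add(profile=full, socket-binding-group=full-sockets)
    /host=wfdevas/server-config=doomsday-dev-server:add(group=doomsday-dev-server-group, socket-binding-port-offset=0)

    # A hypothetical second application on the same host gets offset 1 (port 8081)
    /server-group=armageddon-dev-server-group:add(profile=full, socket-binding-group=full-sockets)
    /host=wfdevas/server-config=armageddon-dev-server:add(group=armageddon-dev-server-group, socket-binding-port-offset=1)

    # Start the doomsday dev server
    /host=wfdevas/server-config=doomsday-dev-server:start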

Why two Pre-Production servers in the development environment? We wanted to balance the load across application servers, giving business peace of mind to test their applications without "interruptions". The heavy applications don't sit on the same physical server; we distribute them in a way that avoids a substantial performance impact for business.

Production is different. Though we have the same setup as the development environment, we have PROD and DR servers. The DR servers are our Disaster Recovery servers: should all our production application servers fail, a DR server automatically kicks in. In Wildfly, this can be configured using Wildfly HA, but since we already had an environment with HAProxy set up, why not use it?

Taking the same example: doomsday is now production ready, so our setup will be as follows:
  • doomsday-prod-server-group
    • doomsday-prod-server
      • WFPROD1 (Let's say port offset is 0 since it's the very first application Prod server).
      • WFPROD2 (Let's say port offset is 0 since it's the very first application Prod server).
      • WFDR1 (Let's say port offset is 0 since it's the very first application DR server).
Our HAProxy is configured to delegate HTTP requests to either WFPROD1 or WFPROD2. Should those two servers become unavailable, emergency routing sends traffic to WFDR1.
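
Here is a minimal haproxy.cfg sketch of that routing, assuming the example host names and default HTTP port from above (the backend name and health-check URL are illustrative). HAProxy's backup keyword is what gives us the emergency behaviour: a backup server only receives traffic once every non-backup server is down.

    backend doomsday_prod
        balance roundrobin
        option httpchk GET /doom/
        server wfprod1 WFPROD1:8080 check
        server wfprod2 WFPROD2:8080 check
        # Only used when both WFPROD1 and WFPROD2 fail their health checks
        server wfdr1 WFDR1:8080 check backup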

All of this setup is done through the Wildfly Admin console on the Master server, and deployment is done by assigning a WAR file to the server group; Wildfly then automatically deploys it to every physical server assigned to it. In our case, doomsday.war is assigned to "doomsday-prod-server" and Wildfly deploys it to all three servers assigned to it. Once completed, Wildfly notifies me (in a green popup message) that the deployment was successful.
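
For completeness, the jboss-cli equivalent of that deployment would look something like this (a sketch, assuming the server group name from the example above; in domain mode the WAR is assigned to a server group and the master pushes it to every host running a server in that group):

    deploy doomsday.war --server-groups=doomsday-prod-server-group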

So far we've had good success with this setup. One caveat: when creating JDBC datasources, Wildfly assigns ALL datasources to all of its slave servers (datasources are defined at the profile level, not per server). One cannot simply assign a datasource to a specific server, and I understand why they took that approach.
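
For reference, this is roughly how a datasource could be added from the jboss-cli in domain mode. The profile, datasource name, driver, and connection details below are all hypothetical, and the JDBC driver must already be installed; the point is that the datasource is added to a profile, which is why every server using that profile gets it.

    # Added to the "full" profile, so all servers in groups using that profile see it
    data-source add --profile=full --name=DoomsdayDS --jndi-name=java:jboss/datasources/DoomsdayDS --driver-name=postgresql --connection-url=jdbc:postgresql://dbhost:5432/doomsday --user-name=doom --password=secret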

I hope this helps those who want to create a Wildfly cluster environment. Let me know which approach you've employed in your environments and how you are using Wildfly in a clustered environment, and let's help make the Java EE world a better place. :-)

