Migration of Java web application from Google App Engine to IBM BlueMix

A couple of years ago I wrote an app that runs on the Google App Engine (GAE). This month I had to add a couple of new features to the app and decided to port it to IBM’s Bluemix. This article describes my migration experience and the next article will provide a comparison between Google App Engine and IBM Bluemix.

The reason I originally chose GAE was that in 2012 it provided a free hosted Java runtime (PaaS style) for up to 5 million requests per month (current pricing seems to have changed a bit). At that time I could not find a free hosted IBM WebSphere App Server instance, so I went with GAE and its Jetty Servlet engine. In 2014 IBM made Bluemix available with a free tier providing 375 GB-hours per month (meaning you can run a JVM with a 512 MB heap non-stop at all times), plus IBM provides free tiers for a database and many other services.

The App

First let me explain the app and why I wrote it. I am lazy (in a good way) and I try to avoid doing things that I do not *have* to do. I guess this is not unlike most people. In particular, I do not like doing the same thing twice when I know it could be automated. Where am I going with this, you might ask? USA Triathlon runs a winter competition for athletes (every year from Dec 1st until Feb 28) – it is all done via the Internet, and people compete by entering the amount of swim, bike and run miles they did in training every day. This competition is called the USAT National Challenge Competition (NCC). The goal is to motivate folks to maintain an active lifestyle over the cold months of winter. People must enter their training miles into the USAT website and can see where they stand relative to other athletes (see picture for an example of the first few days of the competition this year).

The problem is that most athletes already enter their swim/bike/run training miles into specialized sites, the most popular one being Trainingpeaks.com. In fact, I do not even enter my miles manually. I just upload my workouts right from my Garmin GPS watch and it automatically adds them to the Trainingpeaks website, where I can do all sorts of analysis on my data (actual vs. planned workouts, miles, heart rates, paces, hours, training load, etc.). The idea of entering my training miles by hand into yet another website was not very appealing, so I decided to write a program that would automatically copy my data from one website to another. Sound familiar? Yes, this is what I do for work. I sell IBM WebSphere integration software.

I decided to create a program that would automatically replicate my data from one site to the other, figuring that if I made it public other folks would use it too. After all, there were 4,000-plus people participating in the competition in the winter of 2012-2013, and I am guessing about half of those folks use Trainingpeaks.com. Assuming 4 minutes per person per day entering data manually into the USAT site, times 2,000 people, times 3 months, times 30 days, divided by 60 minutes in 1 hour: 4*2,000*3*30/60 = 12,000 hours. WOW. If I made this application available on Dec 1, I could save 12,000 hours of triathletes’ time (about 6 hours per person for the entire winter). This was definitely worth a try! You can read about my design decisions and the technical aspects of the application on my personal blog.
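The back-of-the-envelope math above, as a quick sanity check (all of the inputs are my own rough guesses, not measured numbers):

```java
public class TimeSavedEstimate {
    // minutesPerDay * people * days, converted from minutes to hours
    static int totalHoursSaved(int minutesPerDay, int people, int days) {
        return minutesPerDay * people * days / 60;
    }

    public static void main(String[] args) {
        int people = 2000;          // roughly half of the ~4,000 NCC participants
        int days = 3 * 30;          // Dec 1 through end of Feb, approximately
        int hours = totalHoursSaved(4, people, days);
        System.out.println(hours + " total hours, " + (hours / people) + " hours per person");
    }
}
```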

Here is the live app that I migrated; it is running on IBM Bluemix today (click to run it, but you won't get much out of it if you do not have passwords for the NCC and Trainingpeaks websites):


Complete source code, shell scripts and server configuration file for the Tri-Replicator application can be found on GitHub.

Development tools

When I built the original app for Google App Engine back in 2012, I used the Eclipse IDE for Java EE developers with the Google plugin for Eclipse. With this setup I could build and test my app locally on my laptop and deploy into GAE with one click, provided I had created a GAE account. A description of this setup is outside the scope of this article, but you can use this excellent tutorial found on the Vogella site. By the way, Lars Vogel has many excellent, high-quality tutorials. I wish he worked for IBM full time on tutorials for WebSphere :-).

To perform the migration I used the Eclipse IDE for Java EE developers (I did a fresh install of Eclipse Luna to keep things separate from GAE – just in case), but this time with the free IBM Eclipse plugin for the Liberty Profile and the Liberty runtime (I used the latest beta version as of November 2014, but it makes no difference if you use the generally available version of Liberty). Installation is easy, and you can watch a demo on how to get started with Eclipse and Liberty in this post – the demo is a year old, but the beta install steps are the same as shown in the video.

When migrating my application I used a local instance of WebSphere Application Server Liberty profile – anyone can use the Liberty runtime and IBM Eclipse plugin for Liberty for free for development without any restrictions. All development and testing was done locally and the final application was deployed into Bluemix for production – also for free.

Since I planned on using the SQL DB service on Bluemix (really just a cloud version of DB2), I needed a local database for developing the entity objects. I decided to use Apache Derby – it is very small (a 1-minute unzip-and-install) and provides a decent developer experience with full SQL support for everything I needed to do. Alternatively I could have used a free DB2 Express install.

Project setup

Once you have your Eclipse IDE setup with the Liberty plugin, it is time to start working on the source code.

Originally I planned on importing the entire project into the Eclipse workspace all at once and cleaning it up – changing properties, project facets, build paths, etc. However, this proved to be too much: after a couple of hours of fiddling with settings I still could not start and test my project. That is when I decided on a different approach.

What worked best for me was to advance in baby steps. Get one thing working (HelloWorld style), then migrate that part of the app, then another, etc. At a high level here is the sequence of steps that it took to finish the migration:

  1. Create a new Eclipse workspace and a brand new “Tri-Replicator” Dynamic Web 3.0 project targeting the Liberty runtime.
  2. Create a simple HelloWorld Servlet to make sure things work in a local Liberty test instance.
  3. Configure server.xml with the proper datasource for the local Derby database.
  4. Create persistence.xml file to describe JPA properties (original app was using DataNucleus JPA/JDO running on Jetty).
  5. Add HelloWorld JPA entity (just to test JPA), configure JPA 2.0 Facet for my project based on EclipseLink.
  6. Test that JPA works with the local Liberty test server and local Derby.
  7. Configure my Bluemix account and deploy this little HelloWorld app to make sure that no code changes are required to make my app run on Bluemix with SQL DB.
  8. Back to my local Eclipse workspace – add all of the JPA entities and database related classes from my original project, make any annotation adjustments if needed and test all database related stuff.
  9. Similar to baby steps above, keep adding slices of functionality to the project and test them as they work (I had a few test cases written for JUnit, but not enough…). In my app there was a good layered architecture, so I could easily test lower levels independently from having to make the entire application work all at once (which would have been very hard). Therefore I repeated steps above to migrate functions related to Logging, Servlets, REST, JAXB, HTMLunit, Encryption and Security, Scheduling, and finally GWT. All were migrated one at a time because sometimes things stopped working for no apparent reason and I could only find the problem when making one small incremental change on a working configuration.

Thus I migrated separate layers of my application more or less independently from each other – this is why good architecture is so important. If you must keep the entire app together – where either everything works or none of it does – you are going to have a very hard time testing, debugging, maintaining and migrating your application.

One of the nice surprises was that I got rid of a bunch of jar files that I had to use when developing on Jetty for GAE. Since Jetty is only a Servlet engine, I had to evaluate and select 3rd party frameworks to implement REST, JPA, JAXB and other things that my application required. With Liberty, pretty much all of these come well integrated with the runtime, and none of the 3rd party libraries need to be in the WEB-INF/lib folder of my application – except for GWT, which was my choice back in 2012 for the UI. Here are the libraries that I got rid of after migrating from GAE (and Jetty) to Liberty:

  • Since Liberty provides the JAX-RS and JAXB APIs, we no longer need Jersey.
  • Similarly, Liberty comes with a JPA provider, so we can delete DataNucleus libraries (which by the way caused me a lot of trouble during the development back in 2012).
  • I won’t be using services provided by GAE, so I can also delete a bunch of GAE- and Jetty-specific JARs.
  • In 2012 and 2013 GAE supported Java 6, but not Java 7; now that I am on Java 7 I can remove some Apache Commons JARs.

Here is the list of jar files that I removed from the WEB-INF/lib directory of my app because I moved from Jetty on GAE to Liberty on Bluemix:


That is a total of 31 MB worth of extra jar files! This trims a lot of extra weight from the WAR file. The Liberty WAR is much smaller than the same WAR for GAE because the Liberty runtime includes all of the needed libraries. And this is a fairly simple app!

Once I got rid of all of these extra jar files and kept only the GWT and HTMLunit JARs, the size of my WEB-INF/lib directory went down from the former 47 MB to 16 MB. For GWT to work I kept the gwt-servlet.jar (9 MB) file, and for HTMLunit I created a custom trimmed-down version of gwt-dev.jar (which took it down to 7 MB).

User Interface

Back in 2012 when I first built this app, I used GWT – primarily because I wanted to learn how to use it, and it was popular at the time. It was an interesting experience and relatively easy to use – for my small project, anyway. For this migration I decided to keep GWT and replace it with something else later. To be able to work with GWT and enhance my app, I installed the Google plugin for Eclipse Luna – GWT only, not the GAE part. None of the GWT code had to be changed in this migration. I only had to configure the proper environment variables and several Eclipse path settings to compile GWT, as well as add gwt-servlet.jar to the WEB-INF/lib folder. In fact, GWT is nearly the only 3rd party library I have there – the only other one is a small utility project of my own.

One thing about the USAT NCC website is that it only has an HTML interface for human beings and no programmatic API. I used HTMLunit to parse the HTML pages and forms of the NCC website, extract data from there, and drive input into the site. When doing this migration I found out that the HTMLunit libraries (com.gargoylesoftware.htmlunit.*) are now bundled with GWT, so I no longer need a separate jar file from the web (I mentioned above that I trimmed gwt-dev.jar from 38 MB down to 7 MB).

Replace Jersey with Apache Wink

My original application used Jersey for the JAX-RS client and JAXB. I could still use it with Liberty if I put the Jersey JAR into my WAR file (like I did with Jetty), but Liberty provides JAX-RS based on Apache Wink, and it made sense for me to use that instead of Jersey.

There is JAX-RS 2.0 support in the August 2014 Liberty, and I should have used that for my client instead of Wink, but that will be an item for the next version of the app. For now I was curious to see how easy it is to move the JAXB and REST client from Jersey to Wink. It turned out to be easy for my simple case:

  1. Refactored Jersey API:
    import com.sun.jersey.api.client.config.ClientConfig;
    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.WebResource;
    String serverResponse = service.path(servicePath).path(functionPath).queryParams(params).accept(MediaType.TEXT_XML).get(String.class);

    into Apache Wink:

    // See source code for more details: http://bit.ly/1wjGL4n
    import org.apache.wink.client.ClientConfig;
    import org.apache.wink.client.RestClient;
    import org.apache.wink.client.Resource;
    String serverResponse = service.uri(servicePath + "/" + functionPath).queryParams(params).accept(MediaType.TEXT_XML).get(String.class);
  2. To enable classloader resolution at runtime, I added the configuration to my server.xml file to allow my app to use Wink APIs (see “third-party” value added to the classloader properties below). If you have other libraries used by your project, they all need to use the same classloader API visibility. The wink jars are already in the server, so there is no need to put them into your WAR lib folder:
    <!-- see full source for server.xml here: http://bit.ly/1zU5igN -->
    <webApplication contextRoot="/" id="Tri-Replicator" location="Tri-Replicator.war" name="Tri-Replicator">
    	<classloader apiTypeVisibility="spec,ibm-api,api,third-party" commonLibraryRef="EclipseLinkLib" />
    </webApplication>
    <library apiTypeVisibility="spec,ibm-api,api,third-party" filesetRef="EclipseLinkFileset" id="EclipseLinkLib" />
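Both clients ultimately issue the same HTTP GET; the difference is only in how the request URI is assembled (Jersey's `path().path()` chaining vs. Wink's single `uri()` string). A stdlib-only sketch of the URI both styles produce (the host and the parameter name here are made up for illustration):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class RestUriSketch {
    // Assembles base/servicePath/functionPath?k1=v1&k2=v2 - the URI that both
    // the Jersey and the Wink client code in the snippets above end up requesting.
    static String buildUri(String base, String servicePath, String functionPath, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(base).append('/').append(servicePath).append('/').append(functionPath);
        String sep = "?";
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sep)
              .append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            sep = "&";
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("athleteId", "42");
        System.out.println(buildUri("https://example.com/api", "workouts", "list", params));
        // -> https://example.com/api/workouts/list?athleteId=42
    }
}
```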

Data persistence

I stated earlier that my original app on GAE used DataNucleus JPA for the persistence layer and stored data in the Google App Engine datastore. On Bluemix I decided to store my data in the SQL Database service – free up to 100 MB, which is plenty for my app. An interesting and scary story happened a few days after the app went live on Bluemix. Using my app’s special admin interface, I accidentally deleted all of the data from all of the tables, thinking I was connected to my local developer instance. My heart almost stopped when I realized what I had done. After searching for a recovery option for some time, I found that the Bluemix SQL DB automatically backs up data every day (and the user can make a manual backup at any time). This allowed me to roll back my database to the previous state and not lose any data. Phew… that was stressful. The next thing I did was rewrite the admin interface in my app so that any data deletion is delayed by 10 minutes; if I ever do this again, I can simply cancel the deletion operation. I bet someone somewhere has done this kind of thing to a much more important system than this. Scary thoughts, but I digress…
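The delayed-deletion safety net can be sketched with nothing but java.util.concurrent (the class and method names here are my own, not from the Tri-Replicator source):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of the "delayed destructive operation" idea: the actual delete runs
// only after a grace period, and can be cancelled before it fires.
public class DelayedDeletion {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pendingDelete;

    // deleteAction is whatever actually wipes the tables; it runs after the delay
    public synchronized void requestDeleteAll(Runnable deleteAction, long delay, TimeUnit unit) {
        pendingDelete = scheduler.schedule(deleteAction, delay, unit);
    }

    // Returns true if the delete had not started yet and was cancelled in time
    public synchronized boolean cancelPendingDelete() {
        return pendingDelete != null && pendingDelete.cancel(false);
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) {
        DelayedDeletion admin = new DelayedDeletion();
        admin.requestDeleteAll(() -> System.out.println("tables wiped"), 10, TimeUnit.MINUTES);
        // Operator realizes the mistake within the grace period:
        System.out.println("cancelled in time: " + admin.cancelPendingDelete());
        admin.shutdown();
    }
}
```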

Migrating the entity classes from DataNucleus JPA to the Liberty-provided EclipseLink JPA was a fairly simple task, but it did require a few changes in persistence.xml, some annotation changes in the entity classes, and code changes to deal with the initial context lookups and the entity manager. Here is the persistence.xml file:

<!-- See source for persistence.xml file here: http://bit.ly/12R7L0q -->
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 	xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

	<persistence-unit name="TriReplicatorPersistenceUnit" transaction-type="JTA">
		<properties>
			<property name="eclipselink.target-database" value="Derby" />
			<property name="eclipselink.target-server" value="WebSphere_Liberty" />
			<property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver" />
			<property name="eclipselink.ddl-generation" value="create-tables" />
			<!-- <property name="eclipselink.ddl-generation" value="drop-and-create-tables" /> -->
			<property name="eclipselink.ddl-generation.output-mode" value="database" />
			<!-- For debugging use value="FINEST". For more details on logging see: https://wiki.eclipse.org/EclipseLink/Examples/JPA/Logging#Log_Levels -->
			<property name="eclipselink.logging.level" value="INFO" />
			<property name="javax.persistence.jtaDataSource" value="java:comp/env/jdbc/TriReplicatorDB" />
		</properties>
	</persistence-unit>
</persistence>

Here is one of my entity classes with JPA annotations:

// See source code for this file here: http://bit.ly/1slJTZv
@Entity(name = "WORKOUTS")
public class Workout {
    public static final String tableName = "WORKOUTS";

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @OneToOne(cascade = CascadeType.ALL)
    private WorkoutSession workout;

    // TODO - this really needs to be a relationship with the Users table, but in DataNucleus JPA
    // it did not work properly. Since it works in the Liberty JPA, I should use it instead of storing userId:
    // @ManyToOne private User user;
    private Long userId;

    public Long getId() {
        return id;
    }
    // ... more getters, setters, and other methods follow
}

None of my business logic operates on entities directly. For that I use a special DatabaseAccess class, which hides all persistence details from the rest of the application:

// See full source code of the file DatabaseAccess.java here: http://bit.ly/1upbrx4
public class DatabaseAccess {
	// ... fields (em, ctx, transaction, automaticTransaction, log, JNDI_NAME) elided

	public List<User> listUsers() throws DatabaseException {
		Query q = em.createQuery("select u from " + User.tableName + " u");
		List<User> users = q.getResultList();
		if (users == null)
			users = new ArrayList<User>();
		return users;
	}

	public int removeUser(String nameTP, String nameUSAT) throws DatabaseException {
		Query q = em.createQuery("delete from " + User.tableName
				+ " u where (u.nameUSAT = :nameUSAT) and (u.nameTP = :nameTP)");
		q.setParameter("nameUSAT", nameUSAT);
		q.setParameter("nameTP", nameTP);
		return q.executeUpdate();
	}

	public List<Workout> listWorkouts() throws DatabaseException {
		Query q = em.createQuery("select u from " + Workout.tableName + " u");
		return q.getResultList();
	}

	public int deleteAllWorkouts() throws DatabaseException {
		Query q = em.createQuery("delete from " + Workout.tableName + " w");
		return q.executeUpdate();
	}

	public int deleteAllUsers() throws DatabaseException {
		Query q = em.createQuery("delete from " + User.tableName + " u");
		return q.executeUpdate();
	}

	public void transactionBegin() throws DatabaseException {
		try {
			em = (EntityManager) ctx.lookup(JNDI_NAME);
		} catch (NamingException e) {
			log.severe("Error while looking up EntityManager from Initial Context: " + e.getMessage());
			throw new DatabaseException(e.getMessage());
		}
		if (automaticTransaction) {
			try {
				transaction = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
				transaction.begin();
			} catch (NamingException | NotSupportedException | SystemException e) {
				log.severe("Error while starting transaction: " + e.getMessage());
				throw new DatabaseException(e.getMessage());
			}
		}
	}

	public void transactionCommit() throws DatabaseException {
		if (automaticTransaction) {
			try {
				transaction.commit();
			} catch (IllegalStateException | SecurityException | HeuristicMixedException | HeuristicRollbackException
					| RollbackException | SystemException e) {
				log.severe("Error while committing transaction: " + e.getMessage());
				throw new DatabaseException(e.getMessage());
			}
		}
	}

	// ... addAdminEvent(), longTransactionBegin(), longTransactionCommit() and other methods elided
}

Scheduled jobs

Tri-Replicator is designed so that end users enter their user name and password in early December and never need to come back to the application again. All of the work of copying workouts from Trainingpeaks to the USAT NCC website is done automatically, and at the end of the competition (the last day of February every year) the database is purged of all user data. From March 1st until December 1st the application does nothing. On December 1st users can register again and the cycle repeats.

To perform the replication of the data automatically, I used the GAE cron.xml file, which I simply placed into the WEB-INF directory of my WAR file. In this file I configured a special Servlet to be called every 2 hours so that it runs the replication task for all registered users. The advantage of this approach is that it is very simple to configure; the problem is that it is limited in what it can do. Additionally, since GAE shuts down the Jetty instance when there are no user requests, scheduling threads programmatically does not work at all.
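For reference, GAE’s cron configuration is a short XML file; mine looked roughly like this (the /replicate servlet path and description are illustrative, not copied from my source):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/replicate</url>
    <description>Copy new workouts for all registered users</description>
    <schedule>every 2 hours</schedule>
  </cron>
</cronentries>
```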

Unlike GAE, in Bluemix my Liberty instance is kept up and running at all times (unless I shut it down), so I simply used APIs provided by java.util.concurrent.* in the Liberty beta and scheduled the replication task programmatically:

        // See full source code for this file here: http://bit.ly/1wEr6Qg
        @Resource(lookup = "concurrent/ReplicationExecutor")
        private static ManagedScheduledExecutorService replicationExecutor;

        private void scheduleRegularReplication() {
                Runnable replicationTask = new Runnable() {
                        public void run() {
                                try {
                                        new SynchronizerServiceImpl().replicateWorkoutsForAllUsers();
                                } catch (TrainingLogException e) {
                                        log.severe("Replication run failed: " + e.getMessage());
                                }
                        }
                };

                if (replicationExecutor != null) {
                        replicationExecutor.scheduleAtFixedRate(replicationTask, 1,
                                        getConfig().getReplicationFrequencyMinutes(), TimeUnit.MINUTES);
                } else {
                        log.severe("Unable to schedule regular replication because executor = null");
                }
        }
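ManagedScheduledExecutorService extends the JDK’s ScheduledExecutorService, so the scheduling semantics can be demonstrated with the plain JDK executor. A self-contained sketch with the period shortened to milliseconds (class and method names are mine, for illustration only):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReplicationSchedulerSketch {
    // Schedules a stand-in for replicateWorkoutsForAllUsers() at a fixed rate
    // and reports whether it fired at least twice within the timeout.
    static boolean firesRepeatedly(long periodMillis, long timeoutMillis) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch ranTwice = new CountDownLatch(2);
        // Initial delay 0, then repeat every periodMillis (the real app uses minutes)
        executor.scheduleAtFixedRate(ranTwice::countDown, 0, periodMillis, TimeUnit.MILLISECONDS);
        boolean ok = ranTwice.await(timeoutMillis, TimeUnit.MILLISECONDS);
        executor.shutdownNow();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task fired repeatedly: " + firesRepeatedly(50, 5000));
    }
}
```

The key difference on Bluemix is that the executor comes from the container via `@Resource` injection, so its threads are managed by Liberty rather than created by the application.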

Deployment into IBM Bluemix

IBM Bluemix is a commercial Platform as a Service (PaaS) offering based on the CloudFoundry project. My application makes use of two services provided by Bluemix:

  1. IBM Liberty for Java runtime, and
  2. SQL Database (based on IBM DB2)

To get started, you may want to read detailed tutorials found on the Bluemix website. Here is what I did:

  1. Registered (for free) on the Bluemix website.
  2. Logged into the Bluemix control panel.
  3. Created a new space and called it “dev”.
  4. In my space, under the “Applications” category, clicked on the icon “CREATE AN APP”.
  5. From the selection of 13 different boilerplates (quick starts) and 5 choices of runtimes, I scrolled down and selected “Liberty for Java” runtime and filled out the name of my application and Host name.

This is all you need to do on the Bluemix website. The whole thing above took just a few minutes to complete. No software installation, no configuration required. Very easy indeed. So far so good.

Now that we have our basic runtime, it is time to package and push an app to the cloud. Steps below are all done on a local developer machine:

  1. As you develop your application and test it with the local instance of Liberty – you will want to deploy it to Bluemix every now and then. This is where the steps below come in handy. First you export your application WAR file from Eclipse (or build it with maven or whatever tool you use) and put it into the same directory where you run these scripts:
    cp $APP_PATH/bin/Tri-Replicator.war ./
  2. Now it is time to create the CloudFoundry deployment descriptor for our application. In the current directory (where you run all the other Bluemix commands) create a text file that looks something like this (manifest.yml):
       applications:
       - disk_quota: 1024M
         host: Tri-Replicator
         path: CleanServer2.zip
         domain: mybluemix.net
         instances: 1
         memory: 512M
  3. Download and install the CloudFoundry CLI for your OS (very easy – takes only a few minutes).
  4. I have Cygwin installed on my Windows 7 laptop and wrote a bash script to execute the packaging and deployment steps quickly. First you need to set up your environment variables (JAVA_HOME, PATH and a few other variables used in my script):
    # -------- See source for file setenv.sh here: http://bit.ly/1DgTfzZ
    export CF_COLOR=true
    export BLUEMIX_DB_NAME=SQLDB_TriReplicator
    export BLUEMIX_APP_NAME=TriReplicator
    export BLUEMIX_TARGET=Tri-Replicator
    export LIBERTY_INSTANCE=CleanServer2
    export WAR_FILE=Tri-Replicator.war
    export PROJ_PATH=/cygdrive/C/projects_c/Tri-Replicator-Bluemix
    export LIBERTY_HOME=$PROJ_PATH/runtimes/Liberty-8-5-5-3-tri-replicator
    export JAVA_HOME=$PROJ_PATH/jdk.1.7
    export PATH=$PATH:$JAVA_HOME/bin:$PROJ_PATH/bluemix:$LIBERTY_HOME/bin
  5. Next you prepare your CloudFoundry client with the right connection information for the Bluemix cloud. This script also creates your SQL DB instance and binds it to the Liberty Runtime we created earlier. You only need to run this once:
    # See source for file cf_setup.sh here: http://bit.ly/1slKeLL
    # Initial CF connection
    cf api https://api.ng.bluemix.net
    cf login -u romanik -p password
    cf target -o $BLUEMIX_TARGET -s dev
    # Create database service - the first attribute is the service name, the second attribute is the plan, and the last attribute is the unique name you are giving to this service instance.
    cf create-service sqldb sqldb_small $BLUEMIX_DB_NAME
    # I am using Liberty beta because I have feature Concurrent-1.0 in my server.xml, which is only available in beta
    # Bind the database to my app
    # Before binding an app - be sure to manually create the app via browser - this is only done once
    cf bind-service $BLUEMIX_APP_NAME $BLUEMIX_DB_NAME
    # Restage the app so that it picks up all settings above
    cf restage $BLUEMIX_APP_NAME
  6. All of the steps above are only done once. The next step will be repeated many times as you keep working on your application. You package the Liberty server along with the WAR file to create a unit of deployment for Bluemix and push the package to the cloud Bluemix instance (note that I am using Liberty beta because I have feature Concurrent-1.0 in my server.xml, which is only available in beta):
    # -------- See source for file cf_push.sh here: http://bit.ly/1zB5ZgV
    # Package Liberty server along with the app
    echo "Did you export the most recent version of the WAR file from Eclipse? (this is needed before packaging the server)"
    #  Temporarily switch around the Eclipse way of deployment with the full war file - we will later switch back
    mv $LIBERTY_HOME/usr/servers/$LIBERTY_INSTANCE/apps/* ./temp
    # Finally package the server
    $LIBERTY_HOME/bin/server package $LIBERTY_INSTANCE --include=usr --archive=../../../../../bluemix/$LIBERTY_INSTANCE.zip
    # Switch back the files so that local Liberty test server uses the xml and not the war file
    rm -rf $LIBERTY_HOME/usr/servers/$LIBERTY_INSTANCE/apps/$WAR_FILE
    mv ./temp/* $LIBERTY_HOME/usr/servers/$LIBERTY_INSTANCE/apps
    # Push an app as packaged server
    cf push $BLUEMIX_APP_NAME -m 512m

The process of deployment looks something like this. Please note that the 18.9 MB size of the app is due to my use of the new (beta) EclipseLink JPA provider plus the GWT and HTMLunit jars – otherwise it would have been just a few KB.

Once the deployment is complete, the application is up and running on the IBM Bluemix PaaS and you can monitor and manage your application on the Bluemix dashboard:


To start using the application, simply click on the link next to the ROUTES label as shown in the picture above. Here is the link to my application if you want to give it a try: https://tri-replicator.mybluemix.net/.


Since I had not done a similar migration before, it took me a few days to move this simple application from GAE to Liberty and Bluemix – and that is with prior experience with GAE, Liberty, and Bluemix. Someone without such experience should add a few days to learn Liberty and Bluemix before starting a similar migration project.

The number of lines of Java code does not matter as much as the number of different frameworks and APIs used. Once you migrate a single JPA entity, it is very easy to move the next 20 of them. Same thing with REST, JAXB, etc.

It is really not the size of the application that matters for migration, but its architecture and complexity. It will come as no surprise that well designed applications with layers of abstraction are much easier to migrate than brittle monolithic applications. As always – good design makes a very big difference long term.

Categories: Migration


1 reply

  1. Good write-up! I’m not sure you need the gwt jars. They should just be needed for the js compilation, especially if you’re using REST and not GWT-RPC. GWT-dev also shouldn’t be needed in your final application.

