Problems with ORMs Part 2 – Queries

In my previous post on Object-Relational Mapping tools (ORMs), I discussed various issues that I’ve faced dealing with the common ORMs out there today, including Hibernate. These included issues with generating a schema from POJOs, as well as real-world performance and maintenance problems. Essentially, the conclusion was that ORMs get you most of the way there, but a balanced approach is needed, and sometimes you just want to avoid your ORM’s toolset entirely, so you should be able to bypass it when desired.

One huge flaw I see in modern ORMs, though, is that they really want to help you solve all your SQL problems. What do I mean by this, and why would I say it’s a fault? Well, I believe that Hibernate et al just try too hard and end up providing features that actually hurt developers more than they help. The main thing I have in mind when I say this is query support. Support for complex queries that are easily maintained is seriously lacking in ORMs – not because they’ve omitted things, but because the tools they provide don’t use SQL, which was designed from the ground up for exactly this purpose.

Experiences in Hibernate

It’s been my experience that when you use features like HQL, you’re frequently just saving yourself a few minutes up front, and there’s nothing wrong with that in itself, but it can cause serious problems. Frequently you end up wanting or needing to replace HQL with something more flexible, either because of a bug fix or an enhancement, and this is where the trouble starts.

I consider myself an experienced developer and I pride myself on (usually) not breaking things – to me, that is one of the hallmarks of a good developer. When you’re faced with ripping out a piece of code and replacing it wholesale, such as replacing HQL with SQL, you’re replacing code with a history of bug fixes, enhancements and performance tweaks. You are now responsible for duplicating every change that’s ever been made to this code, and it’s quite possible you don’t understand the full scope of those changes or the niggling problems that were corrected in the past.

Note that this also applies to all the other query methods that Hibernate provides, including the Query API, and by extension, query support within the JPA. The issue is that you don’t want a solution so brittle or limited that it has to be fully replaced later. This means that if you’ll need to revert to SQL to get things done, there’s a good chance you should just do that in the first place. The same concept applies to all areas of software development.

So what do we aim for if the basic querying support in ORMs like Hibernate isn’t good enough?

Criteria for a Solid ORM

Basically, my personal requirements for an ORM come down to the following:

  • Schema first – generate your model from a database, not the other way around. If you have a platform-agnostic way of specifying DDL for the database, great, but it’s not a deal-breaker. Generating a database from some other domain-specific language or format helps nobody and results in a poorly designed schema.
  • SQL only – if you want to help me avoid writing code, then generate/expose key-based, etc. lookups for me. Don’t ask me to use your query API or some new query language. SQL was invented for queries, so let me use the right tool.
  • Give me easy ways to populate my domain objects from queries. This gives me 99% of what I’ll ever need, while giving me flexibility.
  • Allow me to populate arbitrary Java beans with query results – don’t tie me into your registry of known types.
  • Don’t force me into using a typical transaction container like the one Hibernate or Spring provides – they are a disaster and I’ve never seen a practical use for them that made any sense. Let me handle where connections/transactions are acquired and released in my application – typically this happens in only a few places with clear semantics anyway. This can be some abstracted version of JDBC, but let me control it.
  • No clever/magic behaviour in my domain objects – when working with Hibernate, I spend a good deal of time solving the same old proxy and lazy-loading issues. They never end and can’t be solved once and for all, which indicates a serious design issue.

Though these points seem completely reasonable to me, I’ve not encountered any ORMs that really meet my expectations, so at Carfey we’ve rolled our own little ORM, and I have to say that weekend projects and general development with what we have are far easier and faster than with Hibernate or the other ORMs I’ve used. What does it provide?

A Simple Utilitarian ORM

  • Java domain classes are generated from a DB schema. There’s no platform-agnostic DDL yet, but it’s on our TODO list. Beans include support for child collections and FK references, but it’s all lazy and optional – the beans support it, but if you don’t use it, there’s no impact. Use IDs directly if you want, or the domain objects themselves. Persistence handles dirty objects only, and saves are done only when requested – no magic flush behaviour.
  • Generated domain classes are for persistence only! Stick your business logic, etc. elsewhere.
  • SQL is used for all lookups, including primary key fetches and foreign key relationships. If you need to enhance a lookup, just steal the generated SQL and build on it. Methods and SQL are generated automatically for any indexed column, so typesafe lookups are all provided for you out of the box. This also provides a warning to the developer – if a lookup is not available in your domain class, it will likely perform poorly since no index exists.
  • Any domain class can be populated from a custom query in a typesafe manner – it’s flexible but easy to use (see the hypothetical sketch after this list).
  • Improved classes hide the standard JDBC types such as Connection and Statement for ease of use, but we don’t force any transaction semantics on you, and you can always fall back to things like direct result set handling.
  • Some basic required features like a connection pool, database metadata, and soon, database slave failover.
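Since our ORM isn’t public, here’s a purely hypothetical sketch of the style of usage these points describe – every class and method name below is illustrative only, not a real API:

// typesafe lookup generated because customer_id is an indexed column (illustrative)
List<Order> orders = Order.findByCustomerId(conn, 12345L);

// populating the same generated domain class from a custom SQL query
List<Order> bigOrders = Order.fromQuery(conn,
      "SELECT * FROM orders WHERE amount > ? ORDER BY created_date DESC", 500);

// persistence is explicit – only dirty objects are written, and only when asked
Order first = orders.get(0);
first.setStatus("SHIPPED");
first.save(conn);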

We at Carfey don’t believe we’ve created some incredible new ORM that surpasses every other effort out there, and there are many features we’d have to add if this was a public project, but what we have works for us, and I think we have the correct approach. And at the very least, hopefully our experience can help you choose how you use your preferred ORM wisely and not spend too much time serving the tool instead of delivering software.

As a final note, if you have experience with an ORM that meets my list of requirements above and you’ve had good experiences with it, I’d love to hear about it and would consider it for future Carfey projects.

Why you shouldn’t use Quartz Scheduler

If you need to schedule jobs in Java, it is fairly common in the industry to use Quartz directly or via Spring integration. Quartz’ home page at the time of writing claims that using Quartz is a simple 3-step process: download, add to app, execute jobs when you need to. For any of you who actually have experience with Quartz, this is truly laughable.

First of all, adding the Quartz library to your app does not begin to ready your application to schedule jobs. Getting your code to run on a schedule with Quartz is anything but straightforward. You have to write your implementation of the Job interface, and then you have to construct large XML configuration files or add code to your application that builds new instances of JobDetail and Trigger using a complex API such as:

 .withIdentity("myTrigger", "group1")
 .startNow()
 .withSchedule(simpleSchedule()
    .withIntervalInSeconds(40)
    .repeatForever())
 .build()

and then schedule them using a Scheduler instance obtained from a SchedulerFactory. All of this is code you have to write for each job, or the equivalent XML configuration. Quite the headache for something that was supposed to simply be “execute jobs when you need to”. Even Quartz’ own tutorial takes 6 lessons to set up a job. And what happens when you need to change a job’s schedule? Temporarily disable a job? Change the parameters bound to a job? All of these require a build/test/deploy cycle, which is impractical for any organization.
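To illustrate the point, here’s roughly what a complete minimal setup looks like with the Quartz 2.x fluent API, where MyJob is a hypothetical implementation of the Job interface:

import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// all of this just to run one job every 40 seconds
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();

JobDetail job = newJob(MyJob.class)
      .withIdentity("myJob", "group1")
      .build();

Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .startNow()
      .withSchedule(simpleSchedule()
            .withIntervalInSeconds(40)
            .repeatForever())
      .build();

scheduler.scheduleJob(job, trigger);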

Quartz is also deficient in its feature set. Out of the box, it is just a code library for job execution: no monitoring console for reviewing errors and history, no useful and reasonably searchable logging, no support for multiple execution nodes, no administration interface, no alerts or notifications, and inflexible, buggy recovery mechanisms for failed and missed jobs.

Quartz does provide add-on support for multiple nodes, but it requires additional advanced configuration. Quartz also provides an add-on called Quartz Manager; it too needs additional advanced configuration, is a Flash app, and is incredibly cumbersome and impractical to use.

Simply put, Quartz doesn’t meet these basic needs:

  • No out of the box support for multiple execution nodes (pooling or clustering)
  • No administration UI that allows all job scheduling and configuration to be done outside of code
  • No monitoring
  • No alerts
  • Insufficient mechanisms for dealing with errors/failures and recovery

All this means Quartz is not really a justifiable choice as an enterprise scheduler. It is feature poor and has high implementation and ongoing utilization costs in terms of time and energy.

Obsidian Scheduler is a great choice for your Java-based applications. You truly can be up and running the same day you download it. We have a live, interactive demo where you can try out the interface and see first-hand how easy it is to add/change/disable jobs, monitor all node activity, disable/enable nodes, and even take advantage of advanced schedule configuration such as chaining and sticky nodes.

In addition to our product website, we’ve discussed Obsidian’s standout features many times here on our blog. Download it today and give it a try!

Easy Deep Cloning of Serializable and Non-Serializable Objects in Java

Developers frequently rely on third-party libraries to avoid reinventing the wheel, particularly in the Java world, with projects from Apache, Spring and others so prevalent. When dealing with these frameworks, we often have little or no control over the behaviour of their classes.
This can sometimes lead to problems. For instance, if you want to deep clone an object that doesn’t provide a suitable clone method, what are your options, short of writing a bunch of code?

Clone through Serialization
The simplest approach is to clone by taking advantage of an object being Serializable. Apache Commons provides a method to do this, but for completeness, code to do it yourself is below also.

@SuppressWarnings("unchecked")
public static <T extends Serializable> T cloneThroughSerialize(T t) throws Exception {
   ByteArrayOutputStream bos = new ByteArrayOutputStream();
   serializeToOutputStream(t, bos);
   byte[] bytes = bos.toByteArray();
   ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
   return (T)ois.readObject();
}

private static void serializeToOutputStream(Serializable ser, OutputStream os) 
                                                          throws IOException {
   ObjectOutputStream oos = null;
   try {
      oos = new ObjectOutputStream(os);
      oos.writeObject(ser);
      oos.flush();
   } finally {
      if (oos != null) {
         oos.close();
      }
   }
}

// using our custom method
Object cloned = cloneThroughSerialize(someObject);

// or with Apache Commons
cloned = org.apache.commons.lang.SerializationUtils.clone(someObject);

But what if the class we want to clone isn’t Serializable and we have no control over the source code or can’t make it Serializable?

Option 1 – Java Deep Cloning Library
There’s a nice little library which can deep clone virtually any Java Object – cloning. It takes advantage of Java’s excellent reflection capabilities to provide optimized deep-cloned versions of objects.

Cloner cloner = new Cloner();
Object cloned = cloner.deepClone(someObject);

As you can see, it’s very simple and effective, and requires minimal code. It has some more advanced abilities beyond this simple example, which you can check out here.

Option 2 – JSON Cloning
What if we are not able to introduce a new library into our codebase? Some of us deal with approval processes to introduce new libraries, and it may not be worth the hassle for a simple use case.

Well, as long as we have some way to serialize and restore an object, we can make a deep copy. JSON is commonly used, so it’s a good candidate, since most of us use one JSON library or another.

Most JSON libraries in Java have the ability to effectively serialize any POJO without any configuration or mapping required. This means that if you have a JSON library and cannot or will not introduce more libraries to provide deep cloning, you can leverage an existing JSON library to get the same effect. Note this method will be slower than others, but for the vast majority of applications, this won’t cause any performance problems.

Below is an example using the GSON library.

@SuppressWarnings("unchecked")
public static <T> T cloneThroughJson(T t) {
   Gson gson = new Gson();
   String json = gson.toJson(t);
   return (T) gson.fromJson(json, t.getClass());
}
// ...
Object cloned = cloneThroughJson(someObject);

Note that this is likely to work only if the copied object has a default no-argument constructor. In the case of GSON, you can use an instance creator to get around this. Other frameworks have similar concepts, so you can use those if you hit an issue with an unmodifiable class that lacks a default constructor.
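As a sketch of the GSON route, an InstanceCreator for a hypothetical LegacyWidget class without a no-argument constructor might look like this:

import java.lang.reflect.Type;
import com.google.gson.*;

Gson gson = new GsonBuilder()
   .registerTypeAdapter(LegacyWidget.class, new InstanceCreator<LegacyWidget>() {
      public LegacyWidget createInstance(Type type) {
         // placeholder arguments – GSON overwrites the fields during deserialization
         return new LegacyWidget("placeholder");
      }
   })
   .create();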

Conclusion
One thing I do recommend is that for any classes you need to clone, you add some unit tests to ensure everything behaves as expected. This can prevent changes to those classes (e.g. from upgrading library versions) from breaking your application without your knowledge, especially if you have a continuous integration environment set up.
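A minimal JUnit sketch of such a test, assuming a hypothetical Serializable Person bean with a value-based equals() and the cloneThroughSerialize() helper above in scope:

import static org.junit.Assert.*;
import org.junit.Test;

public class CloneTest {
   @Test
   public void deepCloneProducesEqualButIndependentCopy() throws Exception {
      Person original = new Person("Alice", 30);
      Person clone = cloneThroughSerialize(original);

      assertEquals(original, clone);   // same data
      assertNotSame(original, clone);  // distinct instance
   }
}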

I’ve outlined a couple of methods to clone an object outside of normal cases without any custom code. If you’ve used any other methods to get the same result, please share.

Ignoring Self-Signed Certificates in Java

A problem that I’ve hit a few times in my career is that we sometimes want to allow self-signed certificates for development or testing purposes. A quick Google search shows the trouble that countless Java developers have run into over the years. Depending on the exact certificate issue, you may get an error like one of the following, though I’m almost positive there are other manifestations:

java.security.cert.CertificateException: Untrusted Server Certificate Chain


javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Getting around this often requires modifying JDK trust store files, which can be painful and error-prone. On top of that, every developer on your team will have to do the same thing, and the same issues will recur in every new environment.

Fortunately, there is a way to deal with the problem in a generic way that won’t put any burden on your developers. We’re going to focus on vanilla HttpURLConnection type connections since it is the most general and should still help you understand the direction to take with other libraries. If you are using Apache HttpClient, see here.

Warning: Know what you are doing!

Be aware of what using this code means: it means you don’t care at all about host verification and are using SSL just to encrypt communications. You are not preventing man-in-the-middle attacks or verifying you are connected to the host you think you are. This generally comes down to a few valid cases:

  1. You are operating in a locked down LAN environment. You are not susceptible to having your requests intercepted by an attacker (or if you are, you have bigger issues).
  2. You are in a test or development environment where securing communication isn’t important.

If this matches your needs, then go ahead and proceed. Otherwise, maybe think twice about what you are trying to accomplish.

Solution: Modifying Trust Managers

Now that we’re past that disclaimer, we can solve the actual problem at hand. Java allows us to control the objects responsible for verifying a host and certificate for a HttpsURLConnection. This can be done globally but I’m sure those of you with experience will cringe at the thought of making such a sweeping change. Luckily we can also do it on a per-request basis, and since examples of this are hard to find on the web, I’ve provided the code below. This approach is nice since you don’t need to mess with swapping out SSLSocketFactory implementations globally.

Feel free to grab it and use it in your project.

package com.mycompany.http;

import java.net.*;
import javax.net.ssl.*;
import java.security.*;
import java.security.cert.*;

public class TrustModifier {
   private static final TrustingHostnameVerifier 
      TRUSTING_HOSTNAME_VERIFIER = new TrustingHostnameVerifier();
   private static SSLSocketFactory factory;

   /** Call this with any HttpURLConnection, and it will 
    modify the trust settings if it is an HTTPS connection. */
   public static void relaxHostChecking(HttpURLConnection conn) 
       throws KeyManagementException, NoSuchAlgorithmException, KeyStoreException {

      if (conn instanceof HttpsURLConnection) {
         HttpsURLConnection httpsConnection = (HttpsURLConnection) conn;
         SSLSocketFactory factory = prepFactory(httpsConnection);
         httpsConnection.setSSLSocketFactory(factory);
         httpsConnection.setHostnameVerifier(TRUSTING_HOSTNAME_VERIFIER);
      }
   }

   static synchronized SSLSocketFactory 
            prepFactory(HttpsURLConnection httpsConnection) 
            throws NoSuchAlgorithmException, KeyStoreException, KeyManagementException {

      if (factory == null) {
         SSLContext ctx = SSLContext.getInstance("TLS");
         ctx.init(null, new TrustManager[]{ new AlwaysTrustManager() }, null);
         factory = ctx.getSocketFactory();
      }
      return factory;
   }
   
   private static final class TrustingHostnameVerifier implements HostnameVerifier {
      public boolean verify(String hostname, SSLSession session) {
         return true;
      }
   }

   private static class AlwaysTrustManager implements X509TrustManager {
      public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException { }
      public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException { }
      public X509Certificate[] getAcceptedIssuers() { return null; }      
   }
   
}

Usage

To use the above code, just call the relaxHostChecking() method before you open the stream:

URL someUrl = ... // may be HTTPS or HTTP
HttpURLConnection connection = (HttpURLConnection) someUrl.openConnection();
TrustModifier.relaxHostChecking(connection); // here's where the magic happens

// Now do your work! 
// This connection will now live happily with expired or self-signed certificates
connection.setDoOutput(true);
OutputStream out = connection.getOutputStream();
...

There you have it – a complete example of a localized approach to supporting self-signed certificates. This does not affect the rest of your application, which will continue to have strict host checking semantics. This example could be extended to use a configuration setting to determine whether relaxed host checking should be used, and I recommend you do so if this code is primarily a way to facilitate development with self-signed certificates.
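As a minimal sketch, gating the call on a system property could look like this – the property name is made up, so use whatever configuration mechanism your application already has:

// "dev.relaxHostChecking" is an illustrative property name, not a standard one
if (Boolean.getBoolean("dev.relaxHostChecking")) {
   TrustModifier.relaxHostChecking(connection);
}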

If you have any questions about the example or have questions about doing this in a specific HTTP library, leave a post and I’ll do my best to help.

Java 7 – Project Coin Feature Overview

We discussed previously everything that didn’t make it into Java 7 and then reviewed the useful Fork/Join Framework that did make it in. Today’s post will take us through each of the Project Coin features – a collection of small language enhancements that aren’t groundbreaking, but are nonetheless useful for any developer able to use JDK 7.

I’ve come up with a bank account class that showcases the basics of Project Coin features. Take a look…

public class ProjectCoinBanker {
  
  private static final Integer ONE_MILLION = 1_000_000;
  private static final String RICH_MSG = "You need more than $%,d to be considered rich.";

  public static void main(String[] args) throws Exception {
	System.out.println(String.format(RICH_MSG, ONE_MILLION));

	String requestType = args[0];
	String accountId = args[1];
	switch (requestType) {
		case "displayBalance": 
			printBalance(accountId); 
			break;
		case "lastActivityDate" : 
			printLastActivityDate(accountId); 
			break;
		case "amIRich" : 
			amIRich(accountId); 
			break;
		case "lastTransactions" : 
			printLastTransactions(accountId, Integer.parseInt(args[2])); 
			break;
		case "averageDailyBalance" : 
			printAverageDailyBalance(accountId); 
			break;
		default: break;
	}
  }
  
  private static void printAverageDailyBalance(String accountId) {
    String sql = String.format(AVERAGE_DAILY_BALANCE_QUERY, accountId);
    try (
        PreparedStatement s = _conn.prepareStatement(sql);
        ResultSet rs = s.executeQuery();
        ) {
      while (rs.next()) {
        //print the average daily balance results
      }
    } catch (SQLException e) {
      //handle exception, but no need for finally to close resources
      for (Throwable t : e.getSuppressed()) {
        System.out.println("Suppressed exception message is " + t.getMessage());
      }
    }
  }
  
  private static void printLastTransactions(String accountId, int numberOfTransactions) {
	List<Transaction> transactions = new ArrayList<>();
	... handle fetching/printing transactions
  }
  
  private static void printBalance(String accountId) {
	try {
		BigDecimal balance = getBalance(accountId);
		//print balance
	} catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
	    System.err.println("Please see your local branch for help with your account.");
	}
  }
  
  private static void amIRich(String accountId) {
	try {
		BigDecimal balance = getBalance(accountId);
		//find out if the account holder is rich
	} catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
	    System.out.println("Please see your local branch for help with your account.");
	}
  }
  
  private static BigDecimal getBalance(String accountId) 
      throws AccountFrozenException, AccountClosedException, ComplianceViolationException {
      ... getBalance functionality
  }
 
}

Briefly, our ProjectCoinBanker class demonstrates basic usage of the following Project Coin features.

  • Underscores in numeric literals
  • Strings in switch
  • Multi-catch
  • Type inference for typed object creation
  • try with resources and suppressed exceptions

First of all, underscores in numeric literals are pretty self-explanatory. Our example, private static final Integer ONE_MILLION = 1_000_000;, shows that the benefit is visual: developers can quickly scan the code to verify that values are as expected. Underscores are simply ignored wherever they are placed, so they don’t have to fall on natural grouping boundaries, though they cannot begin or end a numeric literal. While not demonstrated here, binary literal support has also been added: in the same way that hex literals are prefixed by 0x or 0X, binary literals are prefixed by 0b or 0B.
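Here’s a quick sketch of the literal forms beyond what ProjectCoinBanker shows:

int million   = 1_000_000;     // underscores for readability
int mask      = 0b0110_1010;   // binary literal, new in Java 7
int maxSigned = 0x7FFF_FFFF;   // underscores work in hex literals too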

Strings in switch are also self-explanatory: the switch statement now accepts String. In our example, we switch on the String argument passed to the main method to determine what request was made. On a side note, this is purely a compiler-level feature, with an indication that JVM support for switching on String may be added at a later date.

Type inference is another easy-to-understand improvement. Instead of our old code List<Transaction> transactions = new ArrayList<Transaction>();, we can simply write List<Transaction> transactions = new ArrayList<>(); since the type can be inferred. You probably won’t find anyone who would argue it shouldn’t have worked this way ever since generics were introduced, but at least it’s here now.

Multi-catch will turn out to be very nice for the conciseness of exception handling code. Too many times, when we wanted to do something based on the exception type thrown, we were forced to write multiple catch blocks all doing essentially the same thing. The new syntax is very clean and logical. Our example, catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e), shows how easily it can be done.

Finally, the last Project Coin feature demonstrated is the try-with-resources syntax and its support for retrieving suppressed exceptions. A new interface, AutoCloseable, has been introduced and applied to all the expected suspects, including Input/OutputStreams, Readers/Writers, Channels, Sockets, Selectors and the java.sql resources Statement, ResultSet and Connection. In my opinion, the syntax is not as natural as the multi-catch change, though I don’t have an alternative in mind.

    try (
        PreparedStatement s = _conn.prepareStatement(sql);
        ResultSet rs = s.executeQuery();
        ) {
      while (rs.next()) {
        //print the average daily balance results
      }
    } catch (SQLException e) {
      //handle exception, but no need for finally to close resources
      for (Throwable t : e.getSuppressed()) {
        System.out.println("Suppressed exception message is " + t.getMessage());
      }
    }

First we see that we can include multiple resources in try with resources – very nice. We can even reference previously declared resources in the same block as we did with our PreparedStatement. We still handle our exception, but we don’t need to have a finally block just to close the resources. Notice too that there is a new method getSuppressed() on Throwable. This allows us to access any Exceptions that were thrown in trying to “autoclose” the declared resources. There can be at most one suppressed exception per resource declared. Note: if the resource initialization throws an exception, it would be handled in your declared catch block.

That’s it. Nothing earth-shattering, but some simple enhancements we can all begin using without too much trouble. Project Coin also includes a feature regarding varargs and compiler warnings. Essentially, it boils down to a new annotation (@SafeVarargs) that can be applied at the method declaration to allow developers to remove @SuppressWarnings("varargs") from their consuming code. This has been applied to all the key suspects within the JDK, and the same annotation is available for any of your own genericized varargs methods.
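A minimal sketch of the annotation applied to your own generic varargs method:

import java.util.ArrayList;
import java.util.List;

@SafeVarargs // suppresses "unchecked generic array creation" warnings at call sites
public static <T> List<T> listOf(T... items) {
   List<T> list = new ArrayList<T>(items.length);
   for (T item : items) {
      list.add(item);
   }
   return list;
}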

Descriptions of the Project Coin feature set found online are inconsistent at best; hopefully this post gives you a solid summary of what you can actually use in JDK 7.

Testing GWT Apps with Selenium or WebDriver

Good functional testing is one of the most difficult tasks for web application developers and their teams. It is a challenge to develop tests that are cheap to maintain and yet provide good test coverage, which helps reduce QA costs and increase quality.

Both Selenium and WebDriver (which is essentially the successor to Selenium) provide a good way to functionally test web applications in multiple target environments without manual work. In the past, web UIs were built around page navigation, with users submitting forms and so on. These days, more and more web applications use Ajax and therefore act and look a lot more like desktop applications. However, this poses problems for testing: Selenium and WebDriver are designed around user interactions that result in page navigation, and they don’t play well with Ajax apps out of the box.

GWT-based applications in particular have this problem, but there are some ways I’ve found to develop useful and effective tests. GWT also poses other issues with regard to simulating user input and locating DOM elements, and I discuss those below. Note that my code examples use Groovy to keep them concise, but they can be converted to Java fairly easily.

Problem 1: Handling Asynchronous Changes

One issue that developers face pretty quickly when testing applications based on GWT is detecting and waiting for a response to user interaction. For example, a user may click a button which results in an AJAX call which would either succeed and close a window or, alternatively, show an error message. What we need is a way to block until we see the expected changes, with a timeout so we can fail if we don’t see the expected changes.

Solution: Use WebDriverWait

The easiest way to do this is by taking advantage of the WebDriverWait (or Selenium’s Wait). This allows you to wait on a condition and proceed when it evaluates to true. Below I use Groovy code for the conciseness of using closures, but the same can be done in Java, though with a bit more code due to the need for anonymous classes.

def waitForCondition(Closure closure) { 
    int timeout = 20
    WebDriverWait w = new WebDriverWait(driver, timeout) 
    w.until({
        closure() // wait until this closure evaluates to true
    } as ExpectedCondition)
}
	   
def waitForElement(By finder) {
    waitForCondition {
        driver.findElements(finder).size() > 0;
    }
}

def waitForElementRemoval(By finder) {
    waitForCondition {
        driver.findElements(finder).size() == 0;
    }
}

// now some sample test code 

submitButton.click() // submit a form

// wait for the expected error summary to show up
waitForElement(By.xpath("//div[@class='error-summary']"))
// maybe some more verification here to check the expected errors

// ... correct error and resubmit

submitButton.click() 
waitForElementRemoval(By.xpath("//div[@class='error-summary']"))
waitForElementRemoval(By.id("windowId"))
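For those not using Groovy, here’s a sketch of the same waitForElement helper in plain Java, showing the anonymous class the closures replace (driver is assumed to be a WebDriver field):

import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.*;

public void waitForElement(final By finder) {
   new WebDriverWait(driver, 20).until(new ExpectedCondition<Boolean>() {
      public Boolean apply(WebDriver d) {
         return d.findElements(finder).size() > 0;
      }
   });
}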

As you can see from the example, your code can focus on the actual test logic while handling the asynchronous nature of GWT applications seamlessly.

Problem 2: Locating Elements when you have little control over DOM

In web applications that use templating (JSPs, Velocity, JSF, etc.), you have good control and easy visibility into the DOM structure that your pages will have. With GWT, this isn’t always the case. Often, you’re dealing with nested elements that you can’t control at a fine level.

With WebDriver and Selenium, you can target elements using a few methods, but the most useful are by DOM element ID and XPath. How can we leverage these to get maintainable tests that don’t break with minor layout changes?

Solution: Use XPath combined with IDs to limit scope

In my experience, to develop functional GWT tests in WebDriver, you should use somewhat loose XPath as your primary means of locating elements, and supplement it by scoping these calls by DOM ID, where applicable.

In particular, use IDs at top level elements like windows or tabs that are unique in your application and won’t exist more than once in a page. These can help scope your XPath expressions, which can look for window or form titles, field labels, etc.

Here are some examples to get you going. Note that we use // and * in our XPath to keep our expressions flexible so that layout changes do not break our tests unless they are major.

By byUserName = By.xpath("//*[@id='userTab']//*[text()='User Name']/..//input")
WebElement userNameField = webDriver.findElement(byUserName)
userNameField.sendKeys("my new user")

// maybe a user click and then wait for the window to disappear
By submitLocator = By.xpath("//*[@id='userTab']//input[@type='submit']")
WebElement submit = webDriver.findElement(submitLocator)
submit.click()

// use our helper method from Problem 1
waitForElementRemoval By.id("userTab")

Problem 3: Normal element interaction methods don’t work!

GWT and derivatives (Vaadin, GXT, etc.) often do some magic behind the scenes as far as managing the state of the DOM goes. For the developer, this means you’re not always dealing with plain <input> or <select> elements. Simply setting the value of a field through normal means may not work, and neither may WebDriver’s or Selenium’s click methods.

WebDriver has improved in this regard, but issues still persist.

Solution: Unfortunately, just some workarounds

The main problems you’re likely to encounter relate to typing into fields and clicking elements.

Here are some variants that I have found necessary in the past to get around clicks not working as expected. Try them if you are hitting issues. The examples are in Selenium, but they can be adapted to the corresponding calls in WebDriver if you require them. You may also use the Selenium adapter for WebDriver (WebDriverBackedSelenium) if you want to use the examples directly.
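For reference, wrapping a WebDriver instance so that Selenium-style calls like the ones below work as-is looks roughly like this (the base URL is an assumption):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverBackedSelenium;
import org.openqa.selenium.firefox.FirefoxDriver;
import com.thoughtworks.selenium.Selenium;

WebDriver driver = new FirefoxDriver();
Selenium selenium = new WebDriverBackedSelenium(driver, "http://localhost:8080/myapp");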

Click Issues

Sometimes elements won’t respond to a click() call in Selenium or WebDriver. In these cases, you usually have to simulate events in the browser. This was more of an issue with Selenium before 2.0 than with WebDriver.

// Selenium's click sometimes has to be simulated with events.
def fullMouseClick(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
    selenium.mouseUp locator
}

// In some cases you need only mouseDown, as mouseUp may be
// handled the same as mouseDown.
// For example, this could result in a table row being selected, then deselected.
def mouseOverAndDown(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
}

Typing Issues

These are the roundabout methods of typing I have been able to use successfully in the past when GWT doesn’t recognize typed input.

// fires only key events (works for most GWT inputs)
// Useful if WebDriver sendKeys() or Selenium type() aren't cooperating.
def typeWithEvents(String locator, String text) {
    def keyEvents = ["keydown", "keypress", "keyup"]
    typeWithEvents(locator, text, keyEvents)
}

// fires key events, plus blur and focus for really picky cases
def typeWithFullEvents(String locator, String text) { 
    def fullEvents = ["keydown", "keypress", "keyup", "blur", "focus"]
    typeWithEvents(locator, text, fullEvents)
}


// use this directly to customize which events are fired
def typeWithEvents(String locator, String text, def events) {
    text.eachWithIndex { ch, i ->
        selenium.type locator, text.substring(0, i+1)
        events.each{ event ->
            selenium.fireEvent locator, event
        }
    }
}

Note that the exact method that works will have to be figured out by trial and error. In some cases you may get different behaviour in different browsers, so if you run your functional tests against multiple environments, you’ll have to ensure your chosen method works for all of them.

Conclusion

Hopefully some of you find these tips useful. There are similar tips out there but I wanted to compile a good set of examples and workarounds so that others in similar situations don’t hit dead-ends or waste time on problems that require lots of guessing and time.

If you have any other useful tips or workarounds, please share by leaving a comment. Maybe you’ll save someone having to work late or on a weekend!

Using mockFor() and HQL

In a previous post, we discussed how to go about combining mockFor() and mockDomain() to get unit test support for withCriteria. If your code uses GORM’s createCriteria(), you’ll likely want to switch to withCriteria to make it unit testable. We promised to cover HQL as well, so let’s do that now. Again, we’re assuming Grails 1.3.7.

The approach is unchanged from mocking criteria calls. We still need to use mockFor() to mock the static method usage. In our example, we’re using HQL because it’s easier to read and maintain than the equivalent criteria structure. Whatever your rationale, you should be able to follow along.

def defaultJobState = JobState.findAll(
    "from job_state where job_id = :jobId and default_state = :defaultState", 
    [jobId: jobId, defaultState : defaultState])
def orderedJobStates = JobState.findAll(
    "from job_state where job_id = :jobId and end_date >= :endDate order by effective_date", 
    [jobId :jobId, endDate : endDate])
... code for processing returned data

To make things interesting, our example shows two different HQL uses and named parameters. We’ll show how to differentiate between multiple uses, and this can be applied to the criteria scenario as well. When we adopt our previous approach for HQL usage, we get the following unit test code.

    ...
    def testJobs = [defaultJobState, oldJobState, currentJobState]
    mockDomain(JobState, testJobs)
    def mock = mockFor(JobState)
    mock.demand.static.findAll(1..5) { hqlString, params ->
        if (hqlString.contains("default_state")) {
            testJobs.findAll{ testJob ->
                testJob.defaultState == params.defaultState
            }
	} else if (hqlString.contains("end_date")) {
            testJobs.findAll{ testJob ->
                testJob.endDate >= params.endDate
            }
        } else {
            []
        }
    }

As you can see, we’re effectively implementing the equivalent of the HQL. We’re making some assumptions about our expected HQL usage: that only these two usages will be exercised, that any other usage can safely default to no data, and that no other usage, if called, will falsely match our string checks. In your case, you would probably want to make constants out of the HQL strings and do exact matches against those values in your test. You might also want to throw an exception in the else block to protect against unexpected usages. Use your testing experience to make sure your tests are correct and adequate.

Swapping out Spring Bean Configuration at Runtime

Most Java developers these days deal with Spring on a regular basis and there are lots of us out there that have become familiar with its abilities as well as its limitations.

I recently came across a problem that I hadn’t hit before: introducing the ability to rewire a bean’s internals based on configuration introduced at runtime. This is valuable for simple configuration changes or for swapping out something like a Strategy or Factory class, rather than rebuilding a complex part of the application context.

I was able to find some notes about how to do this, but I thought that some might find my notes and code samples useful, especially since I can confirm this technique works on versions of Spring back to 1.2.6. Unfortunately, not all of us are lucky enough to be on the latest and greatest of every library.

Scope of the Problem

The approach I’m going to outline is meant primarily to target changes to a single bean, though this code could easily be extended to change multiple beans. It could be invoked through JMX or some other UI exposed to administrators.

One thing it does not cover is rewiring a singleton all across an application – this could conceivably be done via some reflection and inspection of the current application context, but is likely to be unsafe in most applications unless they have some way of temporarily shutting down or blocking all processing for a period while the changes are made all over the application.

The Code

Here’s the sample code. It takes a list of Strings containing bean definitions and wires them into a new temporary Spring context. You’ll see a parent context can be provided, which is useful in case your new bean definitions need to refer to beans already configured in the application.

public static <T> Map<String, T> extractBeans(Class<T> beanType, 
   List<String> contextXmls, ApplicationContext parentContext) throws Exception {

   List<String> paths = new ArrayList<String>();
   try {
      for (String xml : contextXmls) {
         File file = File.createTempFile("spring", "xml");
         // ... write the file using a utility method
         FileUtils.writeStringToFile(file, xml, "UTF-8");
         paths.add(file.getAbsolutePath());
      }

      String[] pathArray = paths.toArray(new String[0]);
      return buildContextAndGetBeans(beanType, pathArray, parentContext);

   } finally {
      // ... clean up temp files immediately if desired
   }
}

private static <T> Map<String, T> buildContextAndGetBeans(Class<T> beanType, 
               String[] paths, ApplicationContext parentContext) throws Exception {

   FileSystemXmlApplicationContext context = 
      new FileSystemXmlApplicationContext(paths, false, parentContext) {
         @Override  // suppress refresh events bubbling to parent context
         public void publishEvent(ApplicationEvent event) { }

         @Override 
         protected Resource getResourceByPath(String path) {
            return new FileSystemResource(path); // support absolute paths
         }
      };

   try {
      // avoid classloader errors in some environments      
      context.setClassLoader(beanType.getClassLoader());
      context.refresh(); // parse and load context
      Map<String, T> beanMap = context.getBeansOfType(beanType);

      return beanMap;
   } finally {
      try {
         context.close();
      } catch (Exception e) {
         // ... log this
      }
   }
}

If you look at buildContextAndGetBeans(), you’ll see it does the bulk of the work by building up a Spring context with the supplied XML bean definition files. It then returns a map of the constructed beans of the type requested.

Note: Since the temporary Spring context is destroyed, ensure your beans do not have lifecycle methods that cause them to be put into an invalid state when stopped or destroyed.

Here’s an example of a Spring context that might be used to rewire a component. Imagine we have an e-commerce system that performs fraud checks but has various strategies for detecting fraud. We may wish to swap the strategy used by our service class without having to stop and reconfigure the application, since we lose business when we do so. Perhaps we are seeing a specific abuse of the system that would be better dealt with by changing the strategy used to locate fraudulent orders.

Here’s a sample XML definition that could be used to rewire our FraudService.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
   <bean id="fraudStrategy" class="com.example.SomeFraudStrategy">
      <!-- example of a bean defined in the parent application context that we can reference -->
      <property name="fraudRuleFactory" ref="fraudRuleFactory"/>
   </bean>
</beans>

And here is the code you could use to rewire your bean with a reference to the defined fraudStrategy, assuming you have it in a utility class called SpringUtils:

public class FraudService implements ApplicationContextAware {

   private ApplicationContext context;
   // volatile for thread safety (in Java 1.5 and up only)
   private volatile FraudStrategy fraudStrategy;
   
   @Override // get a handle on the parent context 
   public void setApplicationContext(ApplicationContext context) {
      this.context = context;
   }

   public void swapFraudStrategy(String xmlDefinition) throws Exception {
      List<String> definitions = Arrays.asList(xmlDefinition);
      Map<String, FraudStrategy> beans = 
         SpringUtils.extractBeans(FraudStrategy.class, definitions, context);
      if (beans.size() != 1) {
         throw new RuntimeException("Invalid number of beans: " + beans.size());
      }
      this.fraudStrategy = beans.values().iterator().next();
   }
   
}

And there you have it! This example could be extended a fair bit to meet your needs, but I think it shows the fundamentals of how to create a Spring context on the fly and use its beans to reconfigure your application without any downtime.
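To round things out, the actual swap might be triggered like this – where the XML comes from is up to you (a JMX operation or admin controller is typical; readNewStrategyXml() below is purely hypothetical):

// e.g. inside a JMX operation or admin controller action
String xmlDefinition = readNewStrategyXml(); // hypothetical source of operator-supplied XML
fraudService.swapFraudStrategy(xmlDefinition);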

Evolving Document Structures with Morphia and MongoDB

In my previous post on Morphia, I went through some typical usages and mentioned some caveats and workarounds for known problems. I showed how easy it is to work with Morphia and how cleanly it interacts with the Java world.

To follow up on that post, I’m going to discuss how to deal with some real life needs: handling changing schemas and customizing your mapping to handle things like read-only fields and replacing simple fields with complex objects.

Changing Schemas

As nearly anyone who has worked with databases in the development world knows, schemas are always evolving. Fields get deprecated or outright dropped, tables become obsolete, new fields are added, and so on.

While a lot of this pain is avoided by using a schemaless datastore like MongoDB, sometimes we still need special handling for changes, and in the case of Morphia, we have essentially defined a schema, so we do have to find ways to deal with this. The nice part is that Morphia makes it cleaner and easier than you’ll see in just about any ORM.

Deprecating Fields

One good example is a deprecated field that has been replaced by another field. Let’s imagine you have a bug tracking system with documents that look something like this:

{
  _id:1,
  desc: "IE Rendering broken on intranet site",
  componentName: "INTRANET",
  dateCreated: ISODate("2011-09-06T20:52:50.258Z")
}

Here is the Morphia definition:

@Entity("issues")
class Issue {
  @Id private long id;
  private String desc;
  private String componentName;

  private Date dateCreated = new Date();
}

Now imagine at some point we decide to do away with the component field and make it a more generic free text field where users can enter multiple components, versions, or other helpful information. We don’t want to just stick that in the component field, as that would lead to confusion.

Thankfully, we have something in the Morphia toolkit made exactly for this – the @AlsoLoad annotation. This annotation allows us to populate a POJO field from one of multiple possible sources. We simply update our Morphia mapping to indicate the old field name, and we can remove references to the old field without breaking anything. This keeps our code and documents clean.

@Entity("issues")
class Issue {
  @Id private long id;
  private String desc;
  
  @AlsoLoad("componentName") // handle old componentName field
  private String affects;

  private Date dateCreated = new Date();
}

So here we’ve defined automatic translation of our old field without any need to update documents or write special logic within our POJO class to handle documents differently depending on when they were created.
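As a minimal sketch using the classic Morphia API of the era (the Mongo connection and the “bugtracker” database name are assumptions), loading the old document shown above now populates affects from the legacy field:

Morphia morphia = new Morphia();
morphia.map(Issue.class);
Datastore ds = morphia.createDatastore(new Mongo(), "bugtracker");

Issue issue = ds.get(Issue.class, 1L); // the old document with componentName
// issue now carries affects == "INTRANET", loaded via @AlsoLoad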

One important note: in this example, if both the affects field and the old componentName field exist, Morphia will throw an exception, so don’t try using this for anything other than deprecating fields, or perhaps populating a single field with two mutually exclusive properties.

Supporting Read-Only for Deprecated Fields

Another possibility is that you just have to support an old field in documents that the application no longer writes. This one is very simple: use the @NotSaved annotation. When you use it on a field, the data will be loaded but never written by Morphia.

In our previous example, we could just as easily have decided to support the old field for display only, rather than populating it into the affects field, so let’s alter our Morphia POJO a bit to show how @NotSaved is used.

@Entity("issues")
class Issue {
  @Id private long id;
  private String desc;
 
  private String affects;
  
  @NotSaved("componentName") // load old componentName field for display only
  private String componentName;
  
  private Date dateCreated = new Date();
}

Replacing a Field with an Embedded Object

Now what if our componentName field had actually changed to a complex component object which has a name, version and build number? This is a bit trickier since we want to replace one field with multiple. We can’t attempt to load the field from multiple sources since they have different structures. Of course, we can use an embedded object to store the complex component information, but how can we make our code work seamlessly either way without having to update our documents?

In this case, the simplest approach is to use a combination of three annotations. First we mark the old field with the @NotSaved annotation, then introduce a new embedded Component object using the @Embedded annotation, and finally take advantage of one more annotation that Morphia provides – @PostLoad. This one lets us define a method that is executed after the POJO is populated from MongoDB.

Here’s the example:

@Entity("issues")
class Issue {
  @Id private long id;
  private String desc;
 
  private String affects;
  
  @NotSaved("componentName") // load old componentName to convert to component
  private String componentName;
  
  @Embedded // our new complex Component
  private Component component;
  
  private Date dateCreated = new Date();
  // getters and setters ...
  
  @PostLoad
  protected void handleComponent() {
      if (component == null && componentName != null) {
        component = new Component(componentName, null, null);
      }
  }
}

class Component {
  private String componentName;
  private Long version;
  private Long buildNumber;
	
  public Component(String componentName, Long version, Long buildNumber) {
    // ...
  }
  
  // getters and setters ...
}

In this case, we could remove the getter and setter for the componentName field, so that our mapped object only exposes the new and improved interface.

Conclusion

By using the powerful tools that Morphia gives us through its annotation support, we can meet these goals:

  1. Let our document structure adapt with the application and stay clean.
  2. Seamlessly handle changing structure in our Java code without error-prone code.
  3. Expose only the new schema while supporting the old (truly obsoleting the old code and fields).

Hopefully this helps a few of you out with adapting to evolving documents, or at least to become more familiar with the abilities some of these Morphia annotations give you.

Combining mockDomain() and mockFor() in Grails

As we’ve mentioned before, anything you can do to make automated testing easier in your Grails project will help you achieve one of the primary goals of the platform – high productivity.

Since you’ve chosen the Grails platform, you’re likely making good use of its features such as GORM, plugins, convention based spring wiring and Config.groovy/ConfigurationHolder. These all help you get to the business of writing your code without having to deal with plumbing all the time. Unfortunately the byproduct is that these all can make automated testing quite painful. Many times you end up going with integration tests simply because these give you access to the free plumbing even though you really just need to write unit tests. I won’t bother here debating and comparing unit vs. integration tests since this has been discussed many times before. We’ll just assume we want to write unit tests.

Grails (assuming 1.3.7) does provide mockDomain(class, [instances]), which gives you an in-memory “database” of the objects passed to it, without the need for a running container – perfect for unit tests. Unfortunately, as you start writing unit tests and encountering some issues, you come across this little gem in the documentation:

… does not support the mocking of criteria or HQL queries

That’s too bad, really. You’d think they could easily support it, even if they just used an actual in-memory database. Anyway, the documentation then adds:

If you use either of those, simply mock the corresponding methods manually (for example with mockFor() ) or use an integration test with real data.

Sure, we could write an integration test, but we really don’t want to and shouldn’t have to. What about that mockFor() approach? Well, you’re likely here because nobody has documented how to actually do so. We’re here to help. We’ll cover combining mockDomain() with mockFor() for criteria in this post, and in a future post we’ll take a look at HQL queries.

It turns out to be not too bad. The use of closures leaves a little to be desired, but that’s not specific to this problem as it’s common to all mockFor(Domain).demand usage. Our example will assume you are using withCriteria to do something like find a range of data. We’ll also show you how you can provide other free behaviour via mockFor() – such as provided by a plugin.

We’ll have a domain class Order looking for orders placed within a specific date range and then provide mock behaviour for toDTO(dtoClass).

def order1 = new Order(orderId : 1, date : new Date(), amount : 19.99)
def order2 = new Order(orderId : 2, date : new Date(), amount : 39.99)
def order3 = new Order(orderId : 3, date : new Date(), amount : 49.99)
def order4 = new Order(orderId : 4, date : new Date(), amount : 99.99)
def orders = [order1, order2, order3, order4]
def mocker = mockFor(Order, true)
mocker.demand.toDTO(1..4) { clazz ->
    return new OrderDTO(orderId: delegate.orderId, date: delegate.date, amount: delegate.amount)
}
mocker.demand.static.withCriteria(1) {criteriaClosure ->
    def found = []
    orders.each { order ->
        if (criteriaClosure.from <= order.date && criteriaClosure.to >= order.date) {
            found << order
        }
    }
    return found
}
mockDomain(Order, [order1, order2, order3, order4])

Let's review what we've done. We create some test order instances for use both in our "database" and in our mockFor/withCriteria logic. mockFor() gives us a handle to the mock that we can set our expectations against; since we're not really worried about verifying the mock's usage, we just need it to substitute for the database. toDTO is the free method you get with the DTO plugin – use mocker.demand as you would with any Grails mock. Then we substitute for the withCriteria code; just make sure you use demand.static for withCriteria. We apply the logic against our collection of test instances (we could just as easily have used the mocked instances available via mockDomain by doing Order.list().each) and now have a valid, functioning "database" that supports criteria in unit tests.

Of course this doesn't actually test that your withCriteria is implemented correctly. The focus here was to establish an expected set of data to test against. You would have to write integration tests to actually test these types of methods on your model classes.