There's more than one way to deploy your Salesforce releases, and each has its own advantages and disadvantages. Change Sets are easy to use but a bit of a pain to deploy because they have to be pushed and released in each environment separately, which makes it hard to be sure you're in sync. ANT is powerful, and few people would argue against fully automated pipelines built on ANT and Jenkins--but what if you don't have the resources? Tools like Gearset are great, but they're proprietary and cost money.
Using ANT manually can make keeping your package files up to date a bit of a pain--but with a little local ANT scripting you can solve this problem and get many (but not all) of the advantages of an automated pipeline without spending a cent.
The General Idea (or, Why Use Change Sets at All)
One of the disadvantages of ANT is that you need someone to maintain a package.xml file that contains a list of what's being deployed. This sounds easy at first, but in the real world there are lots of little challenges associated with it--as your release approaches, for example, unfinished work has to be removed. That requires clear communication between developers, and careful execution, at a time when everybody's in a bit of a rush. Easier said than done, and it can lead to deployment errors that then need to be resolved.
The idea here is to use a standard Salesforce Change Set to generate your package file every time it's deployed. The advantage is that each developer can manage their own components in the Change Set, without having to tell the Release Manager that something's been removed.
To do this we write an ANT script that:
- Erases the contents of the package directory
- Retrieves the contents of a change set with a standard, clearly defined name
- Uses that list to retrieve all of the components into the package directory
- Pushes the contents to all necessary target environments
It's really quite easy to set up, and it creates an easy-to-replicate release process that could be automated as a Unix cron job if you wanted.
Set Up Your Directories
You can use whatever directory strategy you want for ANT, but you'll need to adjust the paths in the script below to suit what you've done. These instructions assume you have a root directory containing your scripts and build files, and a src directory inside it, which is where we'll be downloading to and deploying from.
The result should look something like this:
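As a sketch--the directory and file names here are assumptions, not requirements--the layout might be:

```
release/
├── build.xml           (the ANT script)
├── build.properties    (sandbox credentials and server URLs)
├── package.xml         (generated by the retrieve step)
└── src/                (retrieved metadata; what gets deployed)
```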
This is what the ANT script looks like in your build.xml file.
If you've used ANT with Salesforce before, this should look familiar: the first block creates a target that retrieves from your developer sandbox, and the second block pushes that to another sandbox--in this case one we're calling test.
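A minimal sketch of what those two blocks might look like, assuming credentials are kept in build.properties under hypothetical dev.* and test.* property names, and that ant-salesforce.jar sits in a lib directory:

```xml
<project name="sf-release" basedir="." xmlns:sf="antlib:com.salesforce">

    <property file="build.properties"/>

    <!-- Load the Salesforce Ant Migration Tool tasks; jar location is an assumption -->
    <taskdef resource="com/salesforce/antlib.xml" uri="antlib:com.salesforce">
        <classpath>
            <pathelement location="lib/ant-salesforce.jar"/>
        </classpath>
    </taskdef>

    <!-- Block 1: retrieve everything listed in package.xml from the dev sandbox -->
    <target name="retrieveDev">
        <sf:retrieve username="${dev.username}" password="${dev.password}"
                     serverurl="${dev.serverurl}" maxPoll="200"
                     retrieveTarget="src" unpackaged="package.xml"/>
    </target>

    <!-- Block 2: push the contents of src to the test sandbox -->
    <target name="deployTest">
        <sf:deploy username="${test.username}" password="${test.password}"
                   serverurl="${test.serverurl}" maxPoll="200"
                   deployRoot="src"/>
    </target>

</project>
```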
This might not look so familiar, and there are two key things to note here:
- The packageNames="All Changes" parameter means that this is going to retrieve the contents of a change set named All Changes in the source sandbox--in this case, our developer integration sandbox.
- Our retrieveTarget parameter is slightly different--we don't want this file to land in our src directory.
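Sketched under the same assumptions about property names (and with the retrieved directory name being hypothetical), the block being described might look like this:

```xml
<!-- Retrieves the change set named "All Changes" from the dev sandbox.
     retrieveTarget deliberately points outside src so the generated
     files don't mix with the deployable source. -->
<target name="retrievePackage">
    <sf:retrieve username="${dev.username}" password="${dev.password}"
                 serverurl="${dev.serverurl}" maxPoll="200"
                 retrieveTarget="retrieved" packageNames="All Changes"/>
</target>
```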
What this does is retrieve a well-formed, perfectly structured package.xml file containing the contents of the change set we named. It's important that the change set name is unique--you'll get an error if it's not.
Putting It All Together
These three steps are the building blocks we're going to use to run an ANT script. They take advantage of some of ANT's existing features including:
- The ability to manipulate files and directories using ANT commands, which are not operating system specific
- The failonerror attribute, which stops a deployment if an error is received. You can set this to false if you'd like, but pay close attention if you do--if something doesn't go properly, everything just keeps running and your environments may go out of sync
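Chaining the pieces together might look something like the sketch below. The target and property names carry over the assumptions made earlier in this post, and the copy step--which moves the generated manifest next to the script where the component retrieve expects it--is one way of wiring this up, not the only one:

```xml
<target name="push-to-full">
    <echo>Cleaning the src directory</echo>
    <delete includeemptydirs="true">
        <fileset dir="src" includes="**/*"/>
    </delete>

    <echo>Retrieving the All Changes manifest</echo>
    <exec executable="ant" failonerror="true">
        <arg value="retrievePackage"/>
    </exec>
    <!-- Copy the generated manifest up to the script's directory -->
    <copy file="retrieved/All Changes/package.xml"
          tofile="package.xml" overwrite="true"/>

    <echo>Retrieving components from dev</echo>
    <exec executable="ant" failonerror="true">
        <arg value="retrieveDev"/>
    </exec>

    <echo>Deploying to test</echo>
    <exec executable="ant" failonerror="true">
        <arg value="deployTest"/>
    </exec>
    <echo>Deploying to test2</echo>
    <exec executable="ant" failonerror="false">
        <arg value="deployTest2"/>
    </exec>
</target>
```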
This very short script does a few things:
- The <echo> elements are entirely optional, but can make reading your output easier
- The first two lines delete the entire contents of your src directory. In this case we're treating the developer sandbox as our master environment. While a lot of people treat a git repo this way I think this is a bad idea, for reasons I'll elaborate on later. The short version: there should always be a developer sandbox that reflects your current git repo.
- The next block retrieves the change set's contents as a file named package.xml. This version of the file is saved in the same directory as this script
- Next we retrieve the actual components from the dev environment. This will repopulate our src directory with only the components that are currently in the change set. (Excess components can cause deployment errors, though not critical failures.)
- The next two blocks run our deployment--in this case we're actually going to push to two environments. These deployments use the package.xml file that was just retrieved
To execute this script you're going to type ant push-to-full at the command line. The name is set in the first target element and you should decide on a standard naming convention to help reduce confusion when you have multiple scripts.
If you're not worried about having separate failonerror values, you can combine the exec statements into a single one like the one below. Just keep in mind that if any one step fails, the whole run fails.
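Under the same assumptions about target names, the combined version might look like this--one exec, one failonerror, so a failure in any target halts everything:

```xml
<!-- Run all targets in one ant invocation; a single failure stops the lot -->
<exec executable="ant" failonerror="true">
    <arg line="retrievePackage retrieveDev deployTest deployTest2"/>
</exec>
```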
Things to Consider
This is a fairly straightforward example that's meant to get you started. If you don't have the resources to put together a fully automated pipeline, this script might help make your deployments easier and keep your environments in sync.
Some things you might want to consider doing:
- Add a test2 or a test3 environment. It's a very straightforward thing to do, and gives you a backup environment that may be more stable than your primary test environment and may enable multiple test streams to run simultaneously
- Integrate this with GIT to automate the push and pull process to your repo. I'll write a bit more about this in another post.
- Add a step that validates against production (or create a different branch in the script that can be called to validate production)
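For that last idea, the deploy task supports a checkOnly attribute that runs the full deployment checks without committing anything to the org. A sketch, assuming hypothetical prod.* properties in build.properties:

```xml
<!-- Validates the contents of src against production without deploying -->
<target name="validateProd">
    <sf:deploy username="${prod.username}" password="${prod.password}"
               serverurl="${prod.serverurl}" maxPoll="200"
               deployRoot="src" checkOnly="true"/>
</target>
```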
Is This Automation?
No, not fully--as I said up front, nothing quite matches an automated build pipeline, and Jenkins is the tool to make that happen. If you're a lone developer, though, this accomplishes a lot of the same goals without having to get Jenkins up and running. You may also be working in an environment where standing up a Jenkins server is a challenge--and this can all be done from your desktop. You can also work with other developers easily by letting them add things to your All Changes change set without having to check with you first.
This may not be the perfect solution, but it might do a good job of filling a gap in your existing workflow.