Part 4: Adding in the camera

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Now I have the Raspberry Pi running Node-RED and can control a pushbutton and LED.  Cool.  And then, the mail carrier came!  My camera module arrived!

Camera module installation is pretty easy.  Just follow the Raspberry Pi Documentation.  There is an easy command-line utility called raspistill that I used to test out the camera.

Now, there is no out-of-the-box node to control the camera module from Node-RED, and I never found one Googling around the internet either.  But that’s OK – this gave me an opportunity to see how extensible Node-RED is.  I would have to integrate the Raspicam Node.js library into function nodes.

Making Raspicam accessible in Node-RED

First, I had to install the raspicam module into Node.js.  The npm package manager makes that easy enough but, again, make sure you are in the right directory.

cd ~/.node-red
npm install raspicam

Now, that makes raspicam accessible in Node.js, but there is an additional step required to make it accessible within Node-RED.  See the Global Context section in the Node-RED Documentation.  In ~/.node-red/settings.js, I added

functionGlobalContext: {
    RaspiCam: require('raspicam')
}

Bounce Node-RED and you should be able to access raspicam inside Node-RED function nodes like

 var camera = new context.global.RaspiCam( opts );

Adding a function node

Now I’m ready to add some code to control the camera.  This is done with a function node.

  1. Drag a function node onto the canvas and connect it like this
    Function Node added to test flow


    A couple things to note here.  First, you can have more than one node wired to a node’s input.  In this case, both the Pin 12 node and the function node will send information to the msg.payload debug node.  We already saw this in the previous post because the Pin 12 node and timestamp injection node send output to the trigger node.  Secondly, you can have more than one node take input from a node’s output.  In this case, the trigger node will pulse the LED, but it will also send the trigger to the function node, which we will use to take a picture.

  2. Double-click the function node.  Name it Take Picture.  Paste the following code into the Function area:
    // We only want to take pictures when
    // the button is pressed, not when it is released.
    if (msg.payload == 1) {
        var encoding = "png";
        var currTime = new Date().getTime();
        // Use the current timestamp to ensure
        // the picture filename is unique.
        var pictureFilename = "/home/pi/pictures/" + currTime + "." + encoding;
        var opts = {
            mode: "photo",
            encoding: encoding,
            quality: 10,
            width: 250,
            height: 250,
            output: pictureFilename,
            timeout: 1
        };
        // Use the global RaspiCam to create a camera object.
        var camera = new context.global.RaspiCam( opts );
        // Take a picture.
        var process_id = camera.start( opts );
        // Send the file name to the next node as a payload.
        return {payload: JSON.stringify(
            {pictureFilename : pictureFilename}) };
    }
  3. Deploy the flow.  You should see two messages in the debug view. The first is the message from the Pin 12 node.  The second is the message created by the “Take Picture” node.  That node has told the camera to take a picture (if you were paying attention, you would have seen the red light on the camera module flash) and has sent a message with the filename to the debug node.
    Deploy messages


    There is a problem here.  The Pin 12 node is configured with “Read initial state of pin on deploy/restart?” checked.  This is causing my flow to trigger and a picture to be taken when I don’t want it to.  I’ll fix that before verifying the camera worked.

  4. I’ll double-click the Pin 12 node and deselect “Read initial state of pin on deploy/restart?” and Deploy again.  This time I get no new debug messages.
  5. Press the pushbutton on the breadboard.  The camera light will blink as it takes a picture and the LED connected to Pin 22 will pulse for 2 seconds.
  6. I’ll open up a VNC session, find the picture file in /home/pi/pictures with the name in the debug message and open it.  The picture isn’t all that exciting.  My Pi camera is sitting on my desk, pointing toward my laptop monitor.  But it proves that things are working.

    Viewing picture on Pi VNC


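The non-camera parts of that function node are easy to try out in plain Node.js.  Here is a sketch of the same filename and payload logic, pulled out into a helper function (the name buildPictureMessage is mine, not part of the flow):

```javascript
// Sketch of the "Take Picture" node's filename and payload logic,
// extracted so it can run without a camera attached.
function buildPictureMessage(dir, encoding, now) {
    // The millisecond timestamp makes each picture filename unique.
    var currTime = now.getTime();
    var pictureFilename = dir + "/" + currTime + "." + encoding;
    // Same payload shape the function node returns to the debug node.
    return { payload: JSON.stringify({ pictureFilename: pictureFilename }) };
}

var msg = buildPictureMessage("/home/pi/pictures", "png", new Date());
console.log(msg.payload);
```
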
Ok, so now I have a Raspberry Pi and a Node-RED flow triggered by an external button that will take a picture using the Pi camera module and store the picture in a file on the device.  I’m making progress but so far I only really have a “thing”, not an “internet of things”.  In the next post, I’ll set up my Node-RED environment on Bluemix so I have something to communicate with.

Part 3: My first Node-RED flow on the Pi

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

In previous posts, I have set up the hardware and software on my Raspberry Pi.  Now it is time to create a Node-RED flow to manage my hardware.

Node-RED flow editor

Node-RED’s flow editor is browser-based.  Once you have Node.js running with the Node-RED package installed, browse to your flow editor at http://your.pi.address:1880.  The Node-RED flow editor is quite simple to use.  On the left is a palette of nodes that can be added to your flow.  In the center is the canvas where you will create your flow.  On the right you see two tabs.  The Info tab will display documentation for any selected node.  The Debug tab will become really important shortly as we start putting together a flow and want to see what is going on.

Node-RED Startup Screen


Button flow

The first flow I created was a simple one to verify my Node-RED environment was seeing the button state changes.

  1. I dragged the rpi-gpio in node to the canvas.  This node isn’t part of out-of-the-box Node-RED; it comes from the node-red-contrib-gpio package I installed earlier.  Note all the information about the node displayed in the Info tab on the right.
  2. Double-click the node to open its configuration dialog.  Set the values as below and close it.

    Input pin configuration


  3. Drag the debug output node to the canvas and connect them as below.
    Input flow


    The blue dots on the nodes indicate that these changes have not yet been deployed.

  4. Select the debug tab on the right side of the screen.
  5. Click the red Deploy button in the upper right.  You should see a ‘0’ payload displayed in the debug window.  The input node was configured to read the initial state of the pin when the flow was deployed.  That caused a message to be sent from the input pin node to the debug node which displayed the payload in the debug window.
  6. Press and release the button on the breadboard.  You should see a transition to a ‘1’, and then a transition back to ‘0’.

    Button press debug messages


LED flow

Next I wanted to create a simple flow that would enable me to manually control the LED state from within Node-RED.

  1. Drag the Inject node onto the canvas.  The inject node enables you to manually interact with the flow.  By default, it simply sends the current timestamp as a payload in the outgoing message.
  2. Drag the Trigger node onto the canvas.  Change the node’s settings to send a 1, then wait 2 seconds before sending a 0.
  3. Finally, drag an rpi-gpio out node to the canvas and configure it for pin 22.
  4. Connect the nodes together like this:

    Output flow


  5. Deploy the flow.
  6. Click the little tab on the left side of the Inject node.  Your LED should light for 2 seconds, then go off.

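Under the covers, the trigger node is doing something like the following plain Node.js sketch, where send is a stand-in for Node-RED delivering a message to the next node:

```javascript
// Sketch of the trigger node: send a 1 immediately,
// then send a 0 after the pulse duration elapses.
function pulse(send, highMs) {
    return new Promise(function (resolve) {
        send(1); // LED on
        setTimeout(function () {
            send(0); // LED off
            resolve();
        }, highMs);
    });
}

// Pulse for 2 seconds, logging each message like the debug node would.
pulse(function (v) { console.log({ payload: v }); }, 2000);
```
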
Combined flow

Now, let’s combine them into a single flow.

  1. Connect the output of the Pin 12 node to the input of the trigger node.
  2. Deploy.
Combined test flow


Now the LED can be triggered by either clicking on the timestamp node or by pressing the pushbutton connected to pin 12.

So now I have a Raspberry Pi with a pushbutton and LED that I’m controlling with Node-RED.  In the next post, I will install the camera and see what I need to do to control it with a Node-RED flow.

Part 1: Setting up the Raspberry Pi hardware

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

A couple days later, my vigil at the mailbox was rewarded – my Raspberry Pi starter kit arrived.  Since I was too cheap to pay for express shipping, the camera wouldn’t arrive for a couple more days.  That was fine with me – I had plenty to learn about the Pi before dealing with the camera.

Unpacking and firing up the Raspberry Pi

I pulled out a spare USB keyboard, mouse and network cable and plugged the Pi into an HDMI monitor.  My starter kit came with a micro-SD card pre-loaded with NOOBS (New Out Of Box Software) so I inserted that.  I plugged in the device and away it went.  Initializing the Raspberry Pi was extremely easy.  I went with the recommended Raspbian OS and had the GUI running in no time.


My starter kit came with a wifi USB dongle.  I plugged that in and configured the wifi settings through the GUI.

Wifi Configuration in the Raspberry Pi GUI


With the Pi on my home wifi network, I could shell into it from my Macbook.  That’s handy because having an extra keyboard and mouse cluttering up my already cluttered desk was going to be a pain.  Now I could talk to the device without needing an external monitor or peripherals.


Being an old hardware guy, I went straight for the breadboard and circuit components that came with my starter kit.  I set up a push button with a 10k pull-down resistor that I could use as an input and an LED with a 2.2k series resistor I could use as an output for testing.  My kit even came with a GPIO to Breadboard Interface Board which made it easy to connect the Pi to the breadboard.  I connected my pushbutton circuit to BCM GPIO 18 on physical pin 12 and my LED circuit to BCM GPIO 25 on physical pin 22.  Here is a view of the circuit:

Test circuits on breadboard


Ok, now I have all the hardware configured.  I just need to figure out how to control it with software.  That’s a task for the next post.

Overview: My Internet of Things and MobileFirst adventure


UPDATED:  I’ve done some follow-up work on this project and rather than keep pasting new additions and changes onto the existing content, I have decided to rewrite the series to better reflect the process from start to finish.  If you have read the series before, you may find some interesting new aspects added and others removed.  Some of the old content was no longer applicable due to updates in Bluemix services.  I also went a little off the deep end with an Apple Watch. 🙂

I am diving into a new project where my eventual goal is to have a mobile app on my phone communicate to a “thing” on the internet.  Yes, those are pretty poorly defined requirements so right off the bat I did some research to decide what the “thing” should be, what amazing task it should perform and how I should make it talk to my phone.

Due to its popularity and just because I thought it sounded cool, I decided my “thing” would be a Raspberry Pi.  These are cheap little single board computers that can run a number of open source operating systems and application packages.  With that decided, I ordered my Raspberry Pi starter kit and anxiously waited at the curb by my mailbox.

But what cool task would it do?  I wanted something related to home automation that was more than pressing a button and lighting a lamp, but I also wanted to keep it reasonably scoped.  I decided on a doorbell interface that would take a picture when the button is pressed and make that picture available to the mobile app.  Ok, for that I would need a camera module so I ordered that and went back to the curb by the mailbox to wait.

Now I knew what the thing would be and what task it would perform.  But how would it communicate to the mobile app?  After a little more research, I decided to leverage the IBM Bluemix platform and services it provides. Hey!  What a coincidence:  I work for IBM!  Ok, maybe it isn’t such a coincidence.  My plan all along was to demonstrate how I could use Bluemix and the IBM MobileFirst Services to build this app.  But in this process, I have learned a ton about non-IBM technology as well so this adventure is not at all just an IBM sales pitch.  But it will demonstrate how you can build applications leveraging cloud-based services and the Internet of Things.

While I was still waiting at the mailbox, I sketched out this architecture diagram for the eventual system:

Yes, this does look overly complicated for a doorbell.  But my not-so-hidden agenda here is to demonstrate how all these cloud and IoT components can come together to provide services for a mobile application.  There is lots to learn here, even if you don’t want to get into the whole system, so feel free to cherry pick whatever helps.

There are four main pieces of functionality in this project:

  1. Push Notifications
  2. Picture capture
  3. Video capture
  4. Apple Watch

I ended up doing a LOT of iterating.  I am going to spare you some of that and present the tasks in a more sequential fashion so you may find there are steps that don’t seem obvious at first, but hopefully you will see why I had to do them later.

  1. Raspberry Pi Setup
    1. Set up the Raspberry Pi hardware
    2. Install the Raspberry Pi software
    3. Creating my first Node-RED flow on the Pi
    4. Add in the camera
  2. Bluemix Setup
    1. Set up the Bluemix application environment
  3. Implementation
    1. Enabling Push Notifications
    2. Requesting and Receiving a Picture
    3. Watching video from the Pi on the Phone
    4. Apple Watch

Part 2: Raspberry Pi software

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

With the hardware all set up, I was ready to dive into the software on the Raspberry Pi.  I quickly learned that there is a bunch of software available for the Raspberry Pi.  Most of it is really easy to install with the ‘apt-get’ command and npm.  I also discovered that the Raspberry Pi Documentation is really helpful.  Here are a few things I started with:


VNC

I knew I wanted to run the GUI from my MacBook and not require an external monitor.  I followed the instructions in the Raspberry Pi Documentation pretty much exactly as written.

I already had VNC Viewer on my MacBook.  I connected to the Pi and had GUI on my MacBook in no time.


Git

Git is a very popular open source version control system.  I could already see I was going to need Git on the Pi in order to get some code I would need, so I installed with

sudo apt-get install git-core


wiringPi

But, I still don’t have a way to monitor my pushbutton or control my LED circuits.  After a little Googling, I came across wiringPi, an access package for the GPIO interface.  It also comes with a simple command line interface, so I would have a way to test out my circuits.  I installed wiringPi using the instructions on the wiringPi install page.

Now, I could go to my command prompt and run gpio commands.  The pin numbering used by wiringPi is not obvious.  There is a whole history there but I’m not going to get into that.  I could read the state of the pushbutton (wiringPi pin 1, which is BCM GPIO 18 on physical pin 12) using

gpio read 1

I could control the LED (wiringPi pin 6, which is BCM GPIO 25 on physical pin 22) with the following:

gpio mode 6 out
gpio write 6 1
gpio write 6 0


Eclipse

Yes, you can install Eclipse on the Raspberry Pi!  My architecture did not include a Java program written in the Eclipse IDE, but I tried it anyway.  Again, installation is pretty easy with the package manager:

sudo apt-get install eclipse

There are a couple prerequisites, though.


Java

It is best to get the full Oracle Java 7 JDK using the Raspberry Pi Documentation.


Pi4J

Pi4J is an API that gives you simple access to the GPIO pins on the board.  Install it with

curl -s | sudo bash

I followed Ian Bull’s tutorial with a few minor modifications to create a simple Java program that monitors the pushbutton and controls the LED.  But to get it to run, I ran into another issue – the underlying wiringPi must be run as root.

Configure Eclipse to run programs as root

There is a really clever Stack Overflow answer on how to do this.


Node-RED

Ok, now that I was able to interface with the hardware using the command line and a Java program, I was ready to move on to bigger and (hopefully) better things.  I wanted to use Node-RED.  What is Node-RED?  I lifted this description from an IBM page:

Node-RED provides a browser-based UI for creating flows of events and deploying them to its light-weight runtime. With built in node.js, it can be run at the edge of the network or in the cloud. The node package manager (npm) ecosystem can be used to easily extend  the palette of nodes available, enabling connections to new devices and services.

A textual description doesn’t really do it justice.  You really need to see a flow to understand how powerful it can be, so let’s get it installed, but beware – there be dragons here.

Installing Node.js

I Googled around a bit and instead of following the official instructions, I jumped into a tutorial I found.  Unfortunately, that tutorial was outdated and led me down a dead end.  As of this writing, Node-RED does not run on the very latest Pi-compatible version of Node.js.  I would definitely recommend you follow the instructions specific to the Raspberry Pi version you have from the Node-RED documentation.  I won’t repeat those instructions here.

Installing additional Nodes

Node-RED can be extended easily by installing additional nodes.  The best place to look is the Node-RED Library.  I installed node-red-contrib-gpio, but here is where another dragon appeared.  Being relatively unfamiliar with Node.js, I overlooked one very important line in the instructions: “From inside your node-red directory“.  In the case of a local Node.js instance, that means the hidden .node-red directory in my home directory on the Pi.  Until I figured that out, the nodes just wouldn’t show up in Node-RED.  So, the installation instructions should be:

cd ~/.node-red
npm install node-red-contrib-gpio

In the next post, I will look at how I created my first Node-RED flow on the Raspberry Pi before I started interfacing it with the Internet of Things.

An example of using Git, IBM Bluemix DevOps Services and MobileFirst Studio

I introduced some basic concepts around using Git with IBM MobileFirst Platform in the last post.  This post will build on that through an example using IBM Bluemix DevOps Services as the host for my repository.

Getting started

First, go to the IBM Bluemix DevOps Services site.  If you don’t have a free account, create one using the SIGN UP button.

Sign Up Button

Once logged in, you will see a screen showing you all the projects you have on DevOps Services. Create a new project.

Create Project

If you have code in a GitHub project already, you can just link to it. Let’s assume you do not, so click Create a new repository and choose Create a Git repo on Bluemix. Accept the defaults to create a private project with Scrum features and don’t make it a Bluemix project (that’s a discussion for another day).

03 - create repo

You now have a project on DevOps Services to contain your code. The Git URL needed to reference this repository can be copied from the Git URL link on the upper right corner of the Git view. Do that now because you will need it shortly.

git url

The next steps are done from your local machine.

Adding a project to Git

From the client side, it is recommended to NOT store your projects within the Eclipse workspace. Unfortunately, the Studio New Project wizard doesn’t give you the option to create the project anywhere else. Fortunately, Eclipse projects are portable and can be moved around easily. The easiest way I have found to get started is to

  1. Create a local Git repo using MFP Studio
  2. Create an MFP project
  3. Move the project into the Git repository
  4. Create your .gitignore file
  5. Put everything under SCM control
  6. Push your changes to the DevOps Services server

Let’s quickly walk through that.

Create a local Git repo

  1. Open MFP Studio to a new workspace.
  2. Open the Git Perspective
  3. In the Git Repositories view, choose Clone a Git repository.
    When asked to select a destination, pick a spot on your file system where you will store Git repositories, say /git and then add a folder within it with the name of your project (e.g. /Users/dennis/git/mySampleProject). Now you have an empty Git repository that references the one on DevOps Services as its “remote”.

Create an MFP project

  1. Right-click on the Working Directory node in the Git Repositories view and select Import Projects. Select Use the New Projects wizard.
  2. Walk through the wizard to create a project named mySampleProject containing a hybrid app named mySampleApp, just as you normally would.
    create project2
  3. Create an Android environment in the mySampleApp project.

Move the project into the Git working directory

Unfortunately, MFP Studio simply creates the project within the root of the workspace, which is not what you want. Fortunately, it is easy to move.

  1. Switch to another perspective such as J2EE or Design.
  2. Right-click the project and select Refactor > Move to move it within your Git repository. Note that you cannot put it directly in the Git repository root. Create a subdirectory such as /Users/dennis/git/mySampleProject/mySampleProject.

Create your .gitignore file

  1. Switch back to the Git perspective and hit refresh in the Git Staging view. You should see about 217 files listed as Unstaged Changes, which means they are available to add to SCM. But remember we don’t necessarily want all of them in there.
  2. Download Andrew Ferrier’s MFP_7.0.gitignore template file.
  3. Rename the file to just .gitignore and place it at the root of your project.
  4. Go back to the Git Staging view, refresh, then drag the .gitignore file from Unstaged Changes to Staged Changes.
  5. Provide a commit message like “Setup .gitignore file”.
  6. You should see the Unstaged Changes file count drop to something like 109 because the .gitignore file is filtering out the stuff you don’t want committed.

Put everything under SCM control

  1. Select all files in Unstaged Changes and drag them to Staged Changes.
  2. Provide a commit message like “Project Load” and commit.

At this point, everything is under Git control, but only on your local repository.

Push your changes to DevOps Services

  1. In the Git Repositories view, right-click mySampleProject/Branches/Local/master and select Push Branch.
  2. Accept all defaults. When the operation finishes, go back to your DevOps Services project and refresh the browser. You will now see your project on the master branch on the server.
    project on server

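For reference, the same six steps can be driven entirely from the git command line instead of the Eclipse tooling.  This is only a sketch: it uses a local bare repository as a stand-in for the DevOps Services remote, and the paths and file contents are made up for illustration.

```shell
# Stand-in for the DevOps Services remote; normally you would use the
# Git URL copied from the project page.
git init --bare /tmp/mySampleProject-remote.git

# 1. Clone the (empty) remote into a local working repository.
git clone /tmp/mySampleProject-remote.git /tmp/git/mySampleProject
cd /tmp/git/mySampleProject
git config user.name "Demo" && git config user.email "demo@example.com"
# Pin the unborn branch name to master (defaults vary by Git version).
git symbolic-ref HEAD refs/heads/master

# 2./3. The MFP project lives in a subdirectory, not the repository root.
mkdir mySampleProject
echo '<app/>' > mySampleProject/application-descriptor.xml

# 4. Set up .gitignore before the first big commit.
echo '*/native/www/' > .gitignore

# 5. Put everything under SCM control.
git add .
git commit -m "Project Load"

# 6. Push the master branch to the remote.
git push origin master
```
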

Managing source code is critical in any project, including MobileFirst projects. It takes a few steps to get set up with Git, but it will pay you dividends quickly.

Source Code Management with IBM MobileFirst Platform

This is a high level overview of managing the source code of your mobile application project in the IBM MobileFirst Platform (MFP). You should come away with enough foundational knowledge to start implementing Source Code Management (SCM) in your own project.

Why manage source code?

There are many varied reasons for using an SCM system. Some reasons are more relevant to developing code as part of a team. SCM helps:

  • Share code with others
  • Coordinate and share changes with others
  • Understand what work has been done, when, where and by whom

But, SCM provides a lot of value, even if you are a development team of one. SCM helps:

  • Organize your changes into a series of steps
  • Identify the codebase for all releases
  • Compare versions of code
  • Review history of changes
  • Compartmentalize or package specific feature or bug fixes for distribution

There are many, many reasons why SCM makes sense and those reasons apply to mobile development projects as much as any other type of project.

SCM and the IBM MobileFirst Platform

There are basically two ways you can develop software for the IBM MobileFirst Platform: using MobileFirst Studio (hereafter referenced as simply “Studio”) or the MobileFirst Command Line Interface (CLI). The concepts I will be discussing apply to both. Most SCM tools provide a CLI that can be used to issue necessary commands to commit your changes, create branches, etc. if you are not using Studio.

Studio is based on the Eclipse IDE. This makes it possible to use any Eclipse SCM tool integration directly from the IDE. These Eclipse integrations provide some really useful graphical tools that help you visualize the changes to your code and merge conflicts when necessary. If the IDE integration doesn’t do everything you need, you can revert to the CLI when you have to.

For this article, I am going to focus on Studio. I’m also going to focus on the SCM system Git. Git is quite popular these days as it is lightweight and open source. The concepts I will discuss in Git can be applied to other SCM systems as well.

IBM provides some guidance on source control for MobileFirst Platform in the Knowledge Center. That page is the place to start when you are considering implementing SCM on your project. The hierarchy diagram there depicts the structure of a MobileFirst project. It is important to understand this because not all files in the project should be put under SCM control.

Types of files in MobileFirst projects

Git provides a mechanism using files called .gitignore that enables you to ignore or just not manage certain files. Why wouldn’t you control all files in the project? Let’s break the project down into categories of files to understand that better.

Source Files

Source files are files that you create or modify to build your application. These would be html, css, JavaScript or other types of files that contain the guts of your application. You definitely want to control these files because if you were to lose them, you would lose functionality. A good example is all the content within the common folder which would be the code for the hybrid application you are building. That’s fairly straightforward.

Derived Files

The IBM documentation doesn’t use the term “derived” file. That’s my own term for files that are produced or generated by MFP when you perform an operation such as build or deploy. The www folder within the native folder of each environment is a good example. When developing a hybrid application, MFP enables you to create code once, but use it in all your environments. The tools used to build the app (such as the SDK for Android or Xcode for iOS environments) might require all hybrid content to exist within the structure of the native folder. The hybrid content from the MFP common folder is copied to the environment’s www folder during the build phase. Therefore, there is no benefit to managing the content of the www folders within the environments. That content just gets overwritten with each new build. If you accidentally deleted it, nothing is lost.  It is best to just ignore all these files and not put them under SCM control.

Derived but Required Files

Things get a little trickier when you start considering how you will share your project with others. There is a category of files whose contents are derived, but MFP only generates them when the environment is first created. The files are not generated or copied with each build or deploy. However, these files are required to exist by the build tools such as Xcode. If you don’t include them in SCM and your teammate grabs a copy of your code, the project won’t build without them.

Let’s look at an example. I am building a hybrid app named mySampleApp in an MFP project named mySampleProject that will run on iPad. When I first create the iPad environment, MFP creates the file mySampleProjectmySampleAppIpad-Info.plist in the iPad native folder. This file contains, among other things, the app version number string. If I change the app version number in the application-descriptor.xml file and rebuild, the *Info.plist file is changed and I have a new version to check in to SCM. I would like to avoid that since the version number in the *Info.plist file is derived. But, if I don’t add the *Info.plist file into my SCM system and my teammate tries to build without this file, the build will fail.  The file is not automatically created by MFP if it doesn’t exist.

This leads you to a bit of a quandary: you don’t want to control these derived files, yet you need to add them to source control in order to maintain a complete, buildable project.

The best approach is to put them under source control and live with the minor consequences. Those consequences are that each time certain changes are made by developers, those files show up as changed. You can often decide to do nothing with the change and you will be fine because the build will regenerate the content within the file.


The .gitignore file

As mentioned earlier, Git provides a way to specify files you want to ignore. You create one or more .gitignore files and include the list of derived files you don’t want to be put under control. Git supports the notion of multiple .gitignore files spread across the file system, the effects of which are cumulative. As a best practice, I would recommend against using more than one. Keeping a single .gitignore at the root of your MFP project is sufficient and it makes it obvious where to go if you need to make changes to the .gitignore file.

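As a sketch, here is the kind of entry such a project-root .gitignore ends up holding, based on the derived-files discussion above (the exact patterns depend on your environments; the template mentioned below is the complete, maintained version):

```
# Derived: hybrid content is copied from the common folder into each
# environment's native www folder on every build.
*/native/www/

# Derived: built artifacts produced by MFP builds.
bin/
```
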
Andrew Ferrier has created a great template .gitignore file. You can download and use the file freely, and he welcomes any feedback or improvements.

Cleaning up the mess of a bad start

You probably won’t get the .gitignore right the first time. That’s fine – you can tweak as you go. But, you need to know that adding a file to .gitignore does not remove it from Git. If the file was already in Git, the .gitignore file has no effect.

The best course of action in this case is to merge everything back into a single branch, fix the .gitignore file, then remove files from source control as needed. Andrew Ferrier has also created a tool that will help you automate the task.

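If you prefer to do it by hand, the core of what such a tool does is git rm --cached, which removes files from Git’s index while leaving them on disk. A self-contained sketch using a throwaway repository and made-up paths:

```shell
# Throwaway repository to demonstrate un-tracking an already-committed file.
mkdir -p /tmp/gitignore-demo && cd /tmp/gitignore-demo
git init
git config user.name "Demo" && git config user.email "demo@example.com"

# A derived file that was committed before .gitignore existed.
mkdir -p iphone/native/www
echo "derived" > iphone/native/www/index.html
git add . && git commit -m "Project Load"

# Adding the pattern now has no effect on files Git already tracks...
echo '*/native/www/' > .gitignore

# ...so remove them from the index (they stay on disk), then commit.
git rm -r --cached iphone/native/www
git add .gitignore
git commit -m "Stop tracking derived www content"
```
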
An effective branching model

Once you have your code in Git, you need to decide how you are going to work with it. Git is very flexible so you really need to establish some conventions and procedures so your entire team knows what is going on.

Branching and tagging conventions are probably the most important things to agree upon. Again, there are many ways you could do this, but one of the most widely accepted models is one proposed by Vincent Driessen several years ago. This model may be more than you need on a small project, but there are lots of good ideas you can glean.

Daniel Kummer has created some Git extensions that automate Driessen’s model. This is nothing you couldn’t do by adhering to your established conventions, but the extensions help automate and enforce the model.

In the next post, I will walk through an example using IBM DevOps Services as the master Git repository for the project.