“Containing” your MobileFirst server

IBM MobileFirst Platform Foundation version 7.1 adds support for running the platform in a container on the cloud in IBM Bluemix.  The idea is that anyone should be able to spin up a MobileFirst server in the cloud, deploy applications and adapters to it and reconfigure mobile apps to use it quickly and easily.  I thought I would give it a try with my Internet of Things Doorbell app and see how easy it really is.

Here is the architecture of my doorbell system.  The only parts that should be affected are the MobileFirst Platform Foundation (MFP) server component and the native iOS Mobile App itself.

Doorbell Architecture

MobileFirst 7.1 provides two options for moving to containers: the prebuilt Getting Started Image and the ability to roll your own image with Evaluation on Containers.

The Getting Started Image is really meant for kicking the tires and evaluating MFP on Bluemix.  It is not at all intended for production.  The server, data proxy, analytics server, database and a sample application are all crammed into one container.  You also cannot make any runtime changes such as renaming the prebuilt runtime, adding more runtimes or customizing the runtime such as adding back-end code, SSL certificates, etc.  The up-side is that you should be able to deploy the prebuilt image very quickly and easily add your own adapter and application.

The Evaluation on Containers is meant for building production deployments.  Why it is called “Evaluation” when Getting Started is really for evaluations is a mystery to me.  At any rate, Evaluation on Containers is not a prebuilt container.  It is actually a “package” of artifacts you use to build your own custom container in Docker.  You build the container locally on your machine then push it up to Bluemix Containers to run it.

For today’s experiment, I’m going to use the prebuilt Getting Started Image.

Create the MFP container

Creating the container is similar to creating an application on Bluemix.

  1. Log into Bluemix at https://ibm.biz/IBM-Bluemix.  If you don’t yet have a Bluemix ID, get one for free by clicking the SIGN UP link in the upper right corner of the landing page.
  2. From the Dashboard, click START CONTAINERS.

    Start Containers

  3. On the next page, there will be a list of all the available prebuilt container images.  Click ibm-mobilefirst-starter.  This is the Getting Started Image.

    ibm-mobilefirst-starter container image

  4. This brings up the creation page for the container.  There are a few things that need to be configured on this page which may not be obvious but are very important.
    1. Provide a Container name.  This can be whatever you want but must be unique within your “space”.
    2. Select the Size of the container.  The Knowledge Center recommends at least the “Small” configuration.
    3. The Public IP address field defaults to Leave Unassigned.  You definitely don’t want that.  The container will need a public IP so that the iOS app can connect to it.  If a public IP has already been allocated to the account, select it here.  If not, select Request and Bind Public IP.
    4. Finally, assign the Public ports that will be opened.  It is required that ports 80, 9080 and 443 be opened.  This field is a little confusing.  The port numbers must be entered separated by commas.  The placeholder example shows them also separated by spaces but the field actually does not allow spaces.  Enter 80,9080,443.

    GettingStarted Container creation page

  5. Click Create to deploy the container.  After only a few seconds, “Your container is running” will appear on the container’s overview page.  Note the public IP address that was bound to the container.  This is how users and apps will get access to it.

    Container Overview Page

Register with the Container

The admin user that will be used to access the container requires a password, and this is where you define it.  Please be more creative with your password than “admin”.  Anyone who knows the public IP address of the container will be able to get to this page and gain access to the Operations Console if they guess the password.

  1. Open a browser tab to the Public IP address (http://134.168.6.227 in my case).  The first time this URL is opened, the container registration page will be shown.  Enter a password here for the admin username.
  2. Click Register.

    Container Registration Screen

  3. After a few seconds, the MobileFirst Operations Console will appear.  There is one existing runtime, MobileFirstStarter, which contains three applications and three adapters.  These support the WishList sample application.

Accessing sample code, Operations Console and Analytics

The next time the container’s Public IP address is opened, a landing page will be shown that enables you to download the WishList sample application code, open the Operations Console to manage your applications, open the Analytics Console to view analytics and data about your apps and download server logs for troubleshooting.

Container Landing Page

Update the MobileFirst project

Some modifications to the settings in the MobileFirst project containing the application and adapters are necessary to prepare the artifacts for deployment to the new runtime.  I have a MobileFirst project called DoorbellIOSNative that contains the CloudantAdapter adapter and an APIiOS SDK app.  In my case, the adapter is completely portable and will require no changes to run on the new runtime.  I will, however, need to modify the SDK app’s worklight.plist file to point to my new host/port and the name of the new runtime.

  1. Open the DoorbellIOSNative/apps/APIiOS/worklight.plist file in an editor.
    1. Replace the host value with the Public IP of the container.
    2. Replace the port value with 9080.
    3. Replace the wlServerContext value with “/MobileFirstStarter/”.
  2. Rebuild the project and prepare it for deployment.  From the MFP command line tool, use the command ‘mfp push‘.  From MFP Studio in Eclipse, use Run As > Run on MobileFirst Development Server.

Deploy the adapter and application to the container

Now the application and adapter need to be deployed to the MobileFirst runtime running in the container on Bluemix.

  1. Open the MobileFirst Operations Console (http://<your-public-ip>:9080/worklightconsole).
  2. Click Add new app or adapter.  Select the CloudantAdapter.adapter file in the bin directory of your MobileFirst project.
  3. Repeat the process for the APIiOS-iOSnative-1.0.wlapp application file.

Update the native iOS application

The application itself needs to be updated so that it will look in the right place for the MFP server.

  1. Open the Xcode project.
  2. Update the references
    1. If the worklight.plist and WorklightAPI artifacts were added to the project by NOT checking “Copy items if needed”, then there is nothing else to do before rebuilding.  The artifacts in Xcode are links to the actual artifacts that were already updated.
    2. If copies of these artifacts were made, delete them now.  Drag-n-drop the new worklight.plist and WorklightAPI artifacts onto the Xcode Project Navigator to add them.
  3. Build and run the app on a device.  The application is now hosted on an MFP server in the cloud!

Again to be clear, the Getting Started Image is not intended for production workloads.  I’ll take a look at building a custom container that can be used for production in a future post.  But the Getting Started Image did enable me to deploy my existing app to a functional MFP server running in the cloud on IBM Bluemix in about 10 or 15 minutes.  When you compare that with the thought of standing up a server, installing MFP Server, creating a runtime, deploying the adapter, deploying the application and configuring the client app, it’s pretty compelling.

My Internet of Things and MobileFirst adventure – Part 8: Build the mobile app

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Finally, we come to creating the mobile app that will tie this all together.  This is going to involve a lot of steps so I will cover the basic outline of what I did without getting into every line of code.  You can download all the source files for this project (including the Node-RED flows) from IBM Bluemix DevOps Services if you want to see the details.

But at a high level, here’s what needs to be done:

  1. Add the IBM Mobile Push service to the Bluemix application.  This will manage the push notifications with APNS (Apple Push Notification Service).
  2. Add nodes to the Bluemix Node-RED flow to initiate a push request when it receives the visitorAlert event.  This is how Bluemix will tell the mobile app that the doorbell was pressed.
  3. Create a MobileFirst project.  This will house an adapter that will retrieve the file from Cloudant as well as generate the SDK library the iOS app will use.
  4. Create a new Xcode project to house the native iOS app
  5. Add the MobileFirst SDK to the Xcode project.
  6. Add the Bluemix SDK to the project.
  7. Write code

Add the IBM Mobile Push service to the Bluemix application

  1. Open the Bluemix application.
  2. Click ENABLE APP FOR MOBILE and then YES and then CONTINUE.
  3. Click ADD A SERVICE OR API  and select Push from the Mobile category, then click USE and then RESTAGE.  I should point out that there is a Push iOS 8 service as well.  I chose not to use it and stuck with the older Push service.  Part of the reason was that Push iOS 8 depends on the newer Advanced Mobile Access and I already knew using it would require customizing my Bluemix Node-RED instance to support it.  I took the path of least resistance, but at some point, I will need to adopt AMA to keep up with the times.
  4. At the top of the application’s overview page, you will now see the app’s registration credentials.  You will need these later to tell the mobile app how to connect to Bluemix.

Configuring Apple Push is not for the faint of heart.  First of all, it requires you to have an Apple Developer license.  You can buy a personal license for $99 per year if you don’t have access to a corporate license.  Once you have a license, you must use the Apple Developer Portal to create a Device ID, an App ID and a Provisioning Profile as well as an SSL certificate that must be used by any application that wants to request APNS to send a push notification.  That process is out of scope for this blog, but you can read about it in the iOS Developer Library.

Once you have your SSL key, you will need to register it with the IBM Bluemix Push Service.

  1. Go to your Bluemix application and click the Push service.
  2. Select the Service Mode, either Sandbox or Production, depending on whether you have a Development or Production SSL key from Apple.

    Bluemix Push Dashboard

  3. Click EDIT under APNS.
  4. Upload your p12 SSL key and enter the password for it.

It is possible to use the IBM Push dashboard to send out test notifications.  This can be helpful in debugging the process.

Bluemix Push Notification tab

Add nodes to Bluemix Node-RED to initiate Push notifications

Node-RED on Bluemix includes nodes for IBM Push.  However, I was not able to get them to work as expected.  So I chose to go about this through brute force with HTTP nodes.  This has the advantage of demonstrating more of the details of how the Bluemix Push SDKs work, but I should really revisit this someday.

  1. Open your Bluemix Node-Red flow editor.
  2. Add a function node.  Its input should be connected to the output of the visitor Alert Event IBM IoT In node.  Name it send notification.
    1. Add the following code.  Yes, I am cheating.  The picture filename is not a URL, but I got lazy and decided to use the URL element to pass it.
      var newmsg = {};
      newmsg.headers = { 
          "IBM-Application-Secret" : "<your-secret>" };
      newmsg.payload = {
          "message" : {
              "alert"    : "Someone is at the door!  " +
                     "Would you like to see who it is?",
              "url"      : msg.payload.pictureFilename
          },
          "settings" : {
              "apns"   : {
                  "sound" : "doorbell/Doorbell.mp3"
              }
          },
          "target" : {
              "platforms" : ['A']
              }
      };
        
      return newmsg;
      
    2. Replace <your-secret> with the value of your application secret code from the application’s overview page.
  3. Add an http request function node (note – this is neither an http input nor an http output node).  Its input should be connected to the output of the send notification function node.
    1. Set its method to POST.
    2. Set the URL to https://mobile.ng.bluemix.net:443/push/v1/apps/<your-appId>/messages where <your-appId> is the application appId found on the overview page.
    3. Set the return type to UTF-8 String.
  4. Add debug nodes if you want.

    Final Bluemix Node-RED flow

Create a MobileFirst project

I’m going to leverage the IBM MobileFirst Platform Foundation for this mobile application.  This is, of course, not necessary in order to use Bluemix services; you can access Bluemix services directly from a native iOS app.  The MobileFirst Foundation provides a long list of capabilities that make developing applications easier and more secure, although I’m not going to demonstrate most of them in this tutorial.  What using it does mean is that there will be a server-side component called an adapter that will do the actual Cloudant communication.  The iOS app will use the MobileFirst SDK to invoke routines on the adapter.

To create the MobileFirst project, you need to have either the MobileFirst Studio extension to Eclipse or the MobileFirst command line interface installed.  I’m going to be using the command line interface here.

  1. Create a new project from the command line with ‘mfp create DoorbellIOSNative‘.
  2. cd DoorbellIOSNative
  3. Start the mfp Liberty server with ‘mfp start‘.
  4. Create an adapter that will retrieve the Cloudant data with ‘mfp add adapter CloudantAdapter --type http‘.
    1. Implement the adapter logic.  The simplest way to do that is to copy the CloudantAdapter.xml and CloudantAdapter-impl.js files from this MobileFirst tutorial.  (A rough sketch of the adapter’s search procedure appears after this list.)
    2. Be sure to edit the domain, username, and password values in the XML file so they reflect the values from your Bluemix overview page.
  5. Add a new iOS native API with ‘mfp add api APIiOS -e ios‘.  There are two configuration files you need to edit in the project’s /apps/APIiOS folder.  Use this MobileFirst tutorial as a guide:
    1. The worklight.plist file contains several settings that your iOS app will need to know in order to connect to the MobileFirst server which hosts the adapter.
    2. The application-descriptor.xml file defines the application name, bundleID and version for the MobileFirst server.
  6. Once you have finished configuring these files, deploy the project to the MobileFirst server with ‘mfp push‘.
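Here is that rough sketch of what the search procedure in CloudantAdapter-impl.js might look like.  This is my own illustration based on the standard MobileFirst HTTP adapter pattern and on the parameters the iOS app will pass to it later (database, design document, index name, limit, include_docs and a query string); treat the tutorial’s files as the source of truth.

    // Illustrative sketch of a Cloudant search procedure for the HTTP adapter.
    // The parameter order matches the invocation used by the iOS app later:
    // ['pictures','ddoc','pictures',10,true,'fileName:"..."']
    function search(database, designDoc, indexName, limit, includeDocs, query) {
        var input = {
            method : 'get',
            returnedContentType : 'json',
            path : database + '/_design/' + designDoc + '/_search/' + indexName,
            parameters : {
                q : query,
                limit : limit,
                include_docs : includeDocs
            }
        };

        // WL.Server.invokeHttp sends the request to the Cloudant host
        // configured in CloudantAdapter.xml (domain, username, password).
        return WL.Server.invokeHttp(input);
    }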

Create an Xcode project

The Xcode project will contain your native iOS app source code.  Create a blank project and build an app with a UIImageView to contain the picture.  If you are not that comfortable coding, you can download my project from IBM Bluemix DevOps Services.  The focus here is not really developing the app UI, but rather showing how to configure the project to access the MobileFirst adapter and IBM Push from Bluemix.

Add the MobileFirst SDK to the Xcode project

When you created the MobileFirst project above, two things were created that you will need to add to your Xcode project – the worklight.plist file and the WorklightAPI folder.  You also need to add several frameworks to the Xcode project to get access to the libraries you will need.  The IBM MobileFirst Platform Foundation Knowledge Center does a good job of explaining how to do this.

Add the Bluemix SDK to the project

The SDKs you need for IBM Push are included with a bundle of SDKs for Bluemix.  You can download them from the IBM Bluemix Docs.  The only tricky part here is that these libraries are Objective-C libraries.  Since I coded my iOS app in Swift, I had to create an Objective-C Bridging Header.  This is a really easy way to expose Objective-C code to Swift applications.

Write code

There are multiple places in the AppDelegate and ViewController where I needed to add custom code.  The two main tasks were configuring the app to receive and handle push notifications, and using the MobileFirst adapter to retrieve the image.

Push notification configuration

didFinishLaunchingWithOptions

This is sort of the main routine for an iOS application.  It is invoked when the app launches.  Here is where you need to have the app register for push notifications with the following code:

        // ========================================
        //  Register for push notifications
        // ========================================
        // Check to see if this is an iOS 8 device.
        let iOS8 = floor(NSFoundationVersionNumber) > floor(NSFoundationVersionNumber_iOS_7_1)
        if iOS8 {
            // Register for push in iOS 8
            let settings = UIUserNotificationSettings(forTypes: 
                UIUserNotificationType.Alert | 
                UIUserNotificationType.Badge | 
                UIUserNotificationType.Sound, categories: nil)
            UIApplication.sharedApplication().registerUserNotificationSettings(settings)
            UIApplication.sharedApplication().registerForRemoteNotifications()
        } else {
            // Register for push in iOS 7
            UIApplication.sharedApplication().registerForRemoteNotificationTypes(
                UIRemoteNotificationType.Badge | 
                UIRemoteNotificationType.Sound | 
                UIRemoteNotificationType.Alert)
        }

Notice that the way you register for push changed with iOS 8 so the code here first determines the OS level, then does what it needs to do.

didRegisterForRemoteNotificationsWithDeviceToken

This method is invoked by the framework when the app has successfully registered with APNS.  Here is where you want to initialize the Bluemix and Push SDKs.  You would replace the placeholders with the route, appId, and appSecret from your Bluemix application.

        // Initialize the connection to Bluemix services
        IBMBluemix.initializeWithApplicationId(
            "<your-app-id>",
            andApplicationSecret: "<your-app-secret>",
            andApplicationRoute: "<your-app-route>")
        
        pushService = IBMPush.initializeService()
        if (pushService != nil)  {
            var push = pushService!
            push.registerDevice("testalias",
                withConsumerId: "testconsumerId",
                withDeviceToken: self.myToken).continueWithBlock{task in
            
                if(task.error() != nil) {
                    println("IBM Push Registration Failure...")
                    println(task.error().description)
                    
                } else {
                    println("IBM Push Registration Success...")
                }
                return nil
            }
        } else {
            println("Push service is nil")
        }

didReceiveRemoteNotification

This method is invoked by the framework whenever a push notification is received.  The payload of the push notification is passed in via the userInfo object.  In a series of ‘if let’ statements, I extract the bits of information I need.  Then I determine if the app was in the background or inactive.  If so, that means the user already saw the notice, read it and chose to invoke it by tapping on it.  In that case, go straight to the code that gets the picture from Cloudant and loads it into the UIImageView.  If the app was in the foreground, create a message alert and only load the image if the user taps OK.

        if let aps = userInfo["aps"] as? NSDictionary {
            if let alert = aps["alert"] as? NSDictionary {
                if let fileName = userInfo["URL"] as? NSString {
                    if let message = alert["body"] as? NSString {
                        if let sound = aps["sound"] as? NSString {
                            if (application.applicationState == UIApplicationState.Inactive ||
                                application.applicationState == UIApplicationState.Background) {
                                    println("I was asleep!")
                                    self.getPicture(fileName)
                                    
                            } else {
                                var noticeAlert = UIAlertController(
                                    title: "Doorbell",
                                    message: message as String,
                                    preferredStyle: UIAlertControllerStyle.Alert)
                                
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Ok",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("User wants to see who's at the door")
                                        println(fileName)
                                        self.getPicture(fileName)
                                }))
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Cancel",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("Handle Cancel Logic here")
                                }))
                                
                                let fileURL:NSURL = NSBundle.mainBundle()
                                    .URLForResource(
                                        "Doorbell", 
                                        withExtension: "mp3")!
                                
                                var error: NSError?
                                self.avPlayer = AVAudioPlayer(
                                    contentsOfURL: fileURL, 
                                    error: &error)
                                if avPlayer == nil {
                                    if let e = error {
                                        println(e.localizedDescription)
                                    }
                                }
                                
                                self.avPlayer?.play()
                                
                                // Display the dialog
                                self.window?.rootViewController?
                                    .presentViewController(
                                        noticeAlert, 
                                        animated: true, 
                                        completion: nil)   
                            }   
                        }
                    }
                }
            }
        }

Retrieve image from Cloudant

didFinishLaunchingWithOptions

When the app first starts up, connect to the MobileFirst Platform Server.  I’m leaving out some details here, but basically you need to call wlConnectWithDelegate.  That method takes a listener parameter, but that listener is really trivial in my case.

        let connectListener = MyConnectListener()
        WLClient.sharedInstance().wlConnectWithDelegate(connectListener)

getPicture

This routine, called when the user chooses to see the picture from the push notification, uses the MobileFirst adapter procedures to search for, and then retrieve, the image from Cloudant.

        // Search Cloudant for the document with the given fileName
        let searchRequest = WLResourceRequest(
            URL: NSURL(string: "/adapters/CloudantAdapter/search"), 
            method: WLHttpMethodGet)
        let queryValue = "['pictures','ddoc','pictures',10,true,'fileName:\"" + 
            (fileName as String) + "\"']"
        searchRequest.setQueryParameterValue(
            queryValue,
            forName: "params")
        searchRequest.sendWithCompletionHandler {
            (response: WLResponse!, error: NSError!) -> Void in
            if(error != nil){
                println("Invocation failure. ")
                println(error.description)
            }
            else if(response != nil){
                let jsonResponse = response.responseJSON
                if let rows = jsonResponse["rows"] as AnyObject? as? NSArray {
                    if (rows.count > 0) {
                        if let row = rows[0] as AnyObject? as? Dictionary<String,AnyObject> {
                            if let doc = row["doc"] as AnyObject? as? Dictionary<String,AnyObject> {
                                if let payload = doc["payload"] as AnyObject? as? Dictionary<String,AnyObject> {
                                    var base64String : String = payload["value"] as! String
                                    
                                    // Strip off prefix since UIImage doesn't seem to want it
                                    let prefixIndex = base64String.rangeOfString("base64,")?.endIndex
                                    base64String = base64String.substringWithRange(
                                        Range<String.Index>(
                                            start: prefixIndex!, 
                                            end: base64String.endIndex))
                                    // Strip out any newline (\n) characters
                                    base64String = base64String.stringByReplacingOccurrencesOfString(
                                        "\n", 
                                        withString: "")
                                    
                                    // Convert to NSData
                                    let imageData = NSData(
                                        base64EncodedString: base64String, 
                                        options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
                                    
                                    // Create an image from the data
                                    let image = UIImage(data: imageData!)
                                    
                                    // Stuff it into the imageView of the ViewController
                                    AppDelegate.vc!.updatePicture(image!)
                                    
                                }
                            }
                        }
                    }
                }
            }
        }

Testing

With all this in place, I build the app in Xcode and deploy it to my iPhone (remember, it must be a physical device because APNS doesn’t work with a simulator).  My app screen looks like this:

Doorbell App – Initial Screen

I walk up to my door and press the doorbell button.  The Node-RED flow on the Raspberry Pi takes a picture, stores it on the local file system and sends an MQTT message to the IBM Internet of Things Foundation broker.  The Node-RED flow running on Bluemix in the cloud receives the message and tells the Raspberry Pi to upload the picture to the Bluemix Cloudant database.  At the same time, Node-RED uses the IBM Push service, also running on Bluemix, to send a remote notification to the app on my phone.  The app receives the push notification and presents an alert to me.

Doorbell App – Push Notification

I tap Ok to see who is at the door.  The iOS app uses the IBM MobileFirst SDK to invoke adapter routines to first search for, then retrieve, the picture from the Bluemix Cloudant database.  It then pops that picture onto my mobile screen so I can see who is ringing my doorbell, even if I am hundreds of miles from home and can’t do a thing about it.

Doorbell App – Someone at the Door

Conclusion

Well, that about does it.  This has been quite an adventure.  Clearly this isn’t an app you would put into production.  I have over-engineered several areas and it is far more complicated than a doorbell needs to be.  But we did get a chance to explore several technologies including:

  • Setting up a Raspberry Pi with an external button input and a camera
  • Node-RED visual editor for controlling and interacting with the Internet of Things
  • Bluemix applications and services including Cloudant Databases, Node.js runtimes and mobile Push.
  • Registering an app with Apple Push Notification Service.
  • IBM MobileFirst Platform Foundation for enterprise security and access to Systems of Record such as Cloudant databases
  • Using Xcode to create a native iOS application that receives push notifications from IBM Bluemix Push and retrieves data from Bluemix Cloudant datastores.

What’s next?  Well, IBM just released MobileFirst Platform Foundation version 7.1 which supports deploying the MobileFirst server in a container in Bluemix – built on Docker technology.  Maybe I will see if I can eliminate my local laptop entirely and have everything except the Raspberry Pi in the cloud!

My Internet of Things and MobileFirst adventure – Part 7a: Further elaboration…

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Well, I have been giving some more thought to the decisions I made in my last post and I think I need to reconsider how I am going to transfer the picture from the Raspberry Pi to the mobile app.  After I discovered the payload limits on the IBM IoT Foundation broker, I decided to push on with MQTT by using the Mosquitto test server.  Now, I know this could work.  I could have the mobile app use something like the Eclipse Paho project Javascript libraries to connect and subscribe directly to the broker topic to receive the picture.  But the more I think about it, the more convoluted that sounds.  OK, yes, this whole project is a bit contrived – there is no real reason to involve all these moving parts to build a picture-taking doorbell app, but using a second MQTT broker just seems really unnatural.  If I have to switch away from IBM IoT Foundation to transfer the picture, I might as well give myself the opportunity to involve something completely new and different, right?

That’s when I thought of Cloudant.  IBM Cloudant is a NoSQL database as a service (DBaaS) that also runs in Bluemix.  There are Cloudant nodes for Node-RED that I could run directly on the Raspberry Pi (I wouldn’t even need to send the picture file to Node-RED on Bluemix).  I could use the Cloudant SDKs to read the picture from a mobile app directly or from a MobileFirst adapter.  With this new concept in mind, I reworked my architecture diagram a bit.

Revised Architecture

The red line is the new connection.  Node-RED on Bluemix actually uses a Cloudant database to store flows, etc., but I didn’t think that was all that relevant, so I left that line off the diagram.

To reverse my Mosquitto decision and move to the Cloudant approach, I need to do three things:

  1. Create a Cloudant database on Bluemix to store the picture.
  2. Modify the Raspberry Pi Node-RED flow to send the picture to Cloudant instead of Mosquitto.
  3. Clean up the Bluemix Node-RED flow.

Create a Cloudant database

I could create a whole new Cloudant database service on Bluemix to store the pictures coming from the Raspberry Pi, but the Bluemix application I created from the Node-RED Starter boilerplate back in Part 6 already has a Cloudant service bound to it, so I decided to just create a new database within that service.

  1. From the Bluemix Dashboard, click your Node-RED application.
  2. Note the Cloudant NoSQL DB service tile.  If you click show credentials on that tile, you will see the credentials you will need later on to enable the Raspberry Pi (and other clients) to connect to the database.
  3. Click on the Cloudant NoSQL DB service tile, then click Launch.  This opens up the Cloudant database dashboard.  There is already one database running in this service – “nodered”.  This is the database that Node-RED uses to store its artifacts so let’s not muck with that.
  4. Click Add new database and name it pictures.

Later, we are going to need to be able to search the Cloudant documents by the “fileName” attribute.  Searching in Cloudant means you need to create a design document which defines the index by which you want to search.

  1. Click the + next to All Design Docs and choose New Search Index.  Set “Save to design document” to New Document and the Index name to pictures.  Add the following code as the Search index function:
    function(doc){
       if(doc.payload.fileName){
           index("fileName", 
               doc.payload.fileName, {"store": "yes"});
       }
    }
  2. Leave the other settings at their defaults and click Save & Build index.

Modify Node-RED flow on Raspberry Pi to use Cloudant

The Raspberry Pi Node-RED doesn’t include nodes for Cloudant out of the box, but we can add node-red-node-cf-cloudant similar to the way we added node-red-contrib-gpio at the end of Part 3.

  1. Open an ssh terminal session to the Raspberry Pi.
  2. Issue the commands
    cd ~/.node-red
    npm install node-red-node-cf-cloudant
  3. Stop and restart Node-RED.  Now you will see Cloudant storage nodes in the palette.

    Cloudant nodes in the palette

  4. Remove the mqtt Picture queue node from the Send Picture flow.
  5. Drag a cloudant out storage node onto the flow.  Its input should be the output of the createJSON node.  Configure the cloudant out node.
    1. Choose External service
    2. Configure a new Server.
      1. Host, Username and Password are the connection credentials from the Bluemix service (the ones I said earlier you would need).
      2. Name is just a name you give to the server configuration so you can reference it again if needed.
    3. Database should be set to pictures
    4. Leave Operation as insert
    5. Name the node insertPicture.
  6. Deploy the flow.  You will get a warning that your test.mosquitto.org configuration node is unused.  Node-RED keeps connection configurations like this around just in case you want to reuse them.  Click Confirm deploy for now.  If you want to remove these configurations, you can select configuration nodes from the hamburger menu on the right.
  7. Test it by pressing the doorbell button.  The Process Doorbell flow should send an IBM IoT visitorAlert event message to Node-RED on Bluemix which should, in turn, send an IBM IoT sendPicture command message back to Node-RED on Raspberry Pi.  This will eventually create a Base64 encoded image string, package it up with the filename, length and status into a payload that gets sent to Cloudant in Bluemix by the insertPicture node.  See if it worked by going to the Bluemix Cloudant dashboard.  You should see one Document in the pictures database that looks something like this.

Cloudant Document
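The exact contents depend on your flow, but a stored picture document has roughly this shape (the values are illustrative; fileName and value are the attributes that the search index above and the mobile app in the next post rely on):

    {
      "_id": "generated by Cloudant",
      "_rev": "generated by Cloudant",
      "payload": {
        "fileName": "/home/pi/pictures/1440123456789.png",
        "value": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...",
        "length": 45678,
        "status": "OK"
      }
    }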

Clean up the Bluemix Node-RED flow

Now let’s remove the MQTT node that receives the picture from the Bluemix Node-RED flow.

  1. Open Node-RED on Bluemix.
  2. Select and delete the MQTT Picture queue node and the debug node attached to it.
  3. Deploy the flow.

Ok, we are finally ready to create a mobile app and wire it up to Bluemix Push and Cloudant to complete the circle.  That’s the next post.

Part 6: Enabling Push Notifications

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

When we left off in the last post, I had a Raspberry Pi that is capable of taking a picture using its camera and storing that picture in a file.  It can also sense that a physical pushbutton has been pressed – a doorbell.  I also have my Bluemix services configured to manage Push notification requests and to broker IoT messages.  Let’s start wiring things together by configuring Push notifications.

Build an iOS app that can receive push notifications

Note: This app evolved and changed a lot as I kept changing my mind on how I wanted to do things.  Again, you may not see why I decided to implement things the way I did until later.  Just play along for now.

I’m writing a Native iOS app in Swift.  I’ll share the entire app in GitHub at the end of this series, but for now I will just show you code snippets of the relevant parts.

To receive remote push notifications, mobile apps must register to receive them.  There are really two parts to this: registering with the Apple Push Notification Service to enable the app to receive notifications, and registering with the IBM Push Notifications service to enable it to send the requests on behalf of the Raspberry Pi app.

Register for remote notifications with APNS

The registration code defines two notification actions and registers them with APNS, enabling the app to display alert boxes with those action buttons, play sounds and update badge counts in response to remote notifications.

You call this procedure from the AppDelegate’s didFinishLaunchingWithOptions method.

Register the device token with IBM Push Notifications service

You know registration with APNS was successful when the system calls your AppDelegate’s didRegisterForRemoteNotificationsWithDeviceToken method, so here’s where you register with IBM Push Notification service, passing along the device token.

Again, realize that this is not complete code, but rather just the important parts.

Invoke a Push notification from the Raspberry Pi

I’m going to change up the Node-RED flows that I created in Part 4.  The flow I had created caused the Pi to capture a picture as soon as the doorbell was pressed.  I am going to have Node-RED only send a push notification to the app when the doorbell is pressed.  The picture won’t actually be taken until the user requests it in response to receiving the Push notification on the iPhone app.  No, I don’t think this is the best design either, but again, I am using this as an opportunity to learn as much as I can about IoT, so this design will enable me to eventually send MQTT commands from the app to the Pi.

There is a Node-RED node package that sends push notification requests to the IBM Push Notifications service.  But what I found is that this node only accepts a simple string to use as the message alert string.  The IBM Push Notifications service can optionally accept a JSON object that defines a number of other items.  I would like to provide some of these other items in my Push message so I am going old school and using an HTTP node instead.  This additionally illustrates how the IBM Push Notifications service can be invoked with a simple HTTP POST.

Raspberry Pi Node-RED flow

I modified my Node-RED flow to the following:

Node-RED flow for processing the doorbell

The “send notification” and “push” nodes are new and require a little explanation.

HTTP Request node

I did not customize this node at all, with the exception of naming it.  The address, headers and content of the message will be provided in the incoming message from the function node (a sketch of that function node’s code appears after the list below).

Send Notification function node

  • Line #3:  The pushbutton will send a message (“1”) when pressed and a message (“0”) when released.  Also, in order to simulate a button press for testing, I added a timestamp Inject node as an input to the node as well.  I only want the node to be triggered when the payload is NOT 0 – meaning either the pushbutton was pressed (not released) or the message is a timestamp.
  • Line #5:  This request will use the HTTP POST verb
  • Line #6:  The URL address of the request.  You will need to replace “your-appID-here” with the AppGUID from your Bluemix application
  • Line #7:  The appSecret HTTP header value needs to be the appSecret from your Bluemix IBM Push Notifications service.
  • Line #8 – #22:  This defines the payload message that will be sent to the IBM Push Notifications service.  For a full explanation, click Model to the right of body at https://mobile.ng.bluemix.net/imfpushrestapidocs/#!/messages/post_apps_applicationId_messages.
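For reference, the send notification function node contains roughly the following.  This is a sketch: the line numbers above refer to the original listing and will not match it exactly, the exact messages endpoint comes from the REST API documentation linked above, and the alert text is just an example.

    // Sketch of the "send notification" function node (illustrative).
    // Only fire when the button is pressed ("1") or when the input came from
    // the timestamp Inject node, never when the button is released ("0").
    if (msg.payload != 0) {
        msg.method = "POST";
        // The messages endpoint from the REST API docs linked above, with
        // your-appID-here replaced by the AppGUID of your Bluemix application.
        msg.url = "https://mobile.ng.bluemix.net/<push-rest-api-path>/apps/" +
            "your-appID-here" + "/messages";
        msg.headers = {
            "appSecret" : "your-appSecret-here"
        };
        msg.payload = {
            "message" : {
                "alert" : "Someone is at the door!  Would you like to see who it is?"
            },
            "settings" : {
                "apns" : {
                    "sound" : "doorbell/Doorbell.mp3"
                }
            },
            "target" : {
                "platforms" : ['A']
            }
        };
        return msg;
    }
    // Ignore the "0" message sent when the pushbutton is released.
    return null;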

Testing

Once deployed, clicking on the timestamp node should cause the function node to create a message and the push node to send it.  You can confirm this by checking the output of the msg node in the debug view.  You should see a statusCode of 202 and a header x-backside-transport value of “OK OK”.

Assuming you have added the code described above (registering for remote notifications and registering the device token with IMFPush) to even a simple iOS app, you should also see the notification appear on your device.

Diagnosing problems with Push notifications is difficult.  You either get the notification on the mobile device or you don’t.  There are no diagnostic messages returned from APNS or log files created.  The most likely cause of a push not finding its way to your phone is that the SSL certificate wasn’t created or installed properly.

Next

The next thing is to set up the basics required for the app to request a picture and the Pi to respond by sending it to the app via IoT Foundation.

That will be the next blog post in this series.

Part 4: Adding in the camera

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Now I have the Raspberry Pi running Node-RED and can control a pushbutton and LED.  Cool.  And then, the mail carrier came!  My camera module arrived!

Camera module installation is pretty easy.  Just follow the Raspberry Pi Documentation.  There is an easy command-line utility called raspistill that I used to test out the camera.

Now there is no out-of-the-box node to control the camera module from Node-RED, and I never found one by Googling around the internet either.  But that’s OK – this gave me an opportunity to see how extensible Node-RED is.  I would have to integrate the Raspicam Node.js library into function nodes.

Making Raspicam accessible in Node-RED

First, I had to install the raspicam module into Node.js.  The npm package manager makes that easy enough, but again, make sure you are in the right directory.

cd ~/.node-red
npm install raspicam

Now, that makes raspicam accessible in Node.js, but there is an additional step required to make it accessible within Node-RED.  See the Global Context section in the Node-RED Documentation.  In ~/.node-red/settings.js, I added

functionGlobalContext: {
        RaspiCam:require('raspicam')
},
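If you have not edited settings.js before, note that functionGlobalContext is a property of the object the file exports, so the addition sits roughly like this (a sketch; all other settings omitted):

    // ~/.node-red/settings.js (abridged)
    module.exports = {
        // ... other Node-RED settings ...
        functionGlobalContext: {
            RaspiCam: require('raspicam')
        }
    };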

Bounce Node-RED and you should be able to access raspicam commands inside Node-RED modules like

 var camera = new context.global.RaspiCam( opts );

Adding a function node

Now I’m ready to add some code to control the camera.  This is done with a function node.

  1. Drag a function node onto the canvas and connect it like this
    Function Node added to test flow

    A couple of things to note here.  First, you can have more than one node wired to a node’s input.  In this case, both the Pin 12 node and the function node will send information to the msg.payload debug node.  We already saw this in the previous post because the Pin 12 node and timestamp injection node send output to the trigger node.  Secondly, you can have more than one node take input from a node’s output.  In this case, the trigger node will pulse the LED, but it will also send the trigger to the function node, which we will use to take a picture.

  2. Double-click the function node.  Name it Take Picture.  Paste the following code into the Function area:
    // We only want to take pictures when 
    // the button is pressed, not when it is released. 
    if (msg.payload == 1) { 
        var encoding = "png"; 
        var currTime = new Date().getTime();
    
        // Use the current timestamp to ensure
        // the picture filename is unique.
        var pictureFilename = "/home/pi/pictures/" + currTime + "." + encoding;
        var opts = {
            mode: "photo",
            encoding: encoding,
            quality: 10,
            width: 250,
            height: 250,
            output: pictureFilename,
            timeout: 1};
    
        // Use the global RaspiCam to create a camera object.
        var camera = new context.global.RaspiCam( opts ); 
    
        // Take a picture
        var process_id = camera.start( opts ); 
    
        // Send the file name to the next node as a payload.
        return {payload: JSON.stringify(
            {pictureFilename : pictureFilename}) };
    }
  3. Deploy the flow.  You see two messages in the debug view. The first is the message from the Pin 12 node.  The second is the message created by the “Take Picture” node.  That node has told the camera to take a picture (if you were paying attention, you would have seen the red light on the camera module flash) and has sent a message with the filename to the debug node.
    Deploy messages

    There is a problem here.  The Pin 12 node is configured with “Read initial state of pin on deploy/restart?” checked.  This is causing my flow to trigger and a picture to be taken when I don’t want it to.  I’ll fix that before verifying the camera worked.

  4. I’ll double-click the Pin 12 node and deselect “Read initial state of pin on deploy/restart?” and Deploy again.  This time I get no new debug messages.
  5. Press the pushbutton on the breadboard.  The camera light will blink as it takes a picture and the LED connected to Pin 22 will pulse for 2 seconds.
  6. I’ll open up a VNC session, find the picture file in /home/pi/pictures with the name in the debug message and open it.  The picture isn’t all that exciting.  My Pi camera is sitting on my desk, pointing toward my laptop monitor.  But it proves that things are working.

    Viewing picture on Pi VNC

Ok, so now I have a Raspberry Pi and a Node-RED flow triggered by an external button that will take a picture using the Pi camera module and store the picture in a file on the device.  I’m making progress but so far I only really have a “thing”, not an “internet of things”.  In the next post, I’ll set up my Node-RED environment on Bluemix so I have something to communicate with.

Part 3: My first Node-RED flow on the Pi

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

In previous posts, I have set up the hardware and software on my Raspberry Pi.  Now it is time to create a Node-RED flow to manage my hardware.

Node-RED flow editor

Node-RED’s flow editor is browser based.  Once you have Node.js running with the Node-RED package installed, browse to your flow editor at http://your.pi.address:1880.  The Node-RED flow editor is quite simple to use.  On the left is a palette of nodes that can be added to your flow.  In the center is the canvas where you will create your flow.  On the right you see two tabs.  The Info tab will display documentation for any selected node.  The Debug tab will become really important shortly as we start putting together a flow and want to see what is going on.

Node-RED Startup Screen

Button flow

The first flow I created was a simple one to verify my Node-RED environment was seeing the button state changes.

  1. I dragged the rpi-gpio in node to the canvas.  This node isn’t in the out-of-the-box Node-RED palette but was added by the node-red-contrib-gpio package I installed.  Note all the information about the node displayed in the Info tab on the right.
  2. Double-click the node to open its configuration dialog.  Set the values as below and close it.

    Input pin configuration

  3. Drag the debug output node to the canvas and connect them as below.
    Input flow

    The blue dots on the nodes indicate that these changes have not yet been deployed.

  4. Select the debug tab on the right side of the screen.
  5. Click the red Deploy button in the upper right.  You should see a ‘0’ payload displayed in the debug window.  The input node was configured to read the initial state of the pin when the flow was deployed.  That caused a message to be sent from the input pin node to the debug node which displayed the payload in the debug window.
  6. Press and release the button on the breadboard.  You should see a transition to a ‘1’, and then a transition back to ‘0’.

    Button press debug messages

LED flow

Next I wanted to create a simple flow that would enable me to manually control the LED state from within Node-RED.

  1. Drag the Inject node onto the canvas.  The inject node enables you to manually interact with the flow.  By default, it simply sends the current timestamp as a payload in the outgoing message.
  2. Drag the Trigger node onto the canvas.  Change the node’s settings to send a 1 then wait for 2 seconds before sending the 0.
  3. Finally, drag an rpi-gpio out node to the canvas and configure it for pin 22.
  4. Connect the nodes together like this:

    Output flow

  5. Deploy the flow.
  6. Click the little tab on the left side of the Inject node.  Your LED should light for 2 seconds, then go off.

Combined flow

Now, let’s combine them into a single flow.

  1. Connect the output of the Pin 12 node to the input of the trigger node.
  2. Deploy.
Combined test flow

Now the LED can be triggered by either clicking on the timestamp node or by pressing the pushbutton connected to pin 12.

So now I have a Raspberry Pi with a pushbutton and LED that I’m controlling with Node-RED.  In the next post, I will install the camera and see what I need to do to control it with a Node-RED flow.

Part 1: Setting up the Raspberry Pi hardware

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

A couple days later, my vigil at the mailbox was rewarded – my Raspberry Pi starter kit arrived.  Since I was too cheap to pay for express shipping, the camera wouldn’t arrive for a couple more days.  That was fine with me – I had plenty to learn about the Pi before dealing with the camera.

Unpacking and firing up the Raspberry Pi

I pulled out a spare USB keyboard, mouse and network cable and plugged the Pi into an HDMI monitor.  My starter kit came with a micro-SD card pre-loaded with NOOBS (New Out Of the Box Software), so I inserted that.  I plugged in the device and away it went.  Initializing the Raspberry Pi was extremely easy.  I went with the recommended Raspbian OS and had the GUI running in no time.

Wifi

My starter kit came with a wifi USB dongle.  I plugged that in and configured the wifi settings through the GUI.

Wifi Configuration in the Raspberry Pi GUI

With the Pi on my home wifi network, I could shell into it from my Macbook.  That’s handy because having an extra keyboard and mouse cluttering up my already cluttered desk was going to be a pain.  Now I could talk to the device without needing an external monitor or peripherals.

Circuits

Being an old hardware guy, I went straight for the breadboard and circuit components that came with my starter kit.  I set up a push button with a 10k pull-down resistor that I could use as an input and an LED with a 2.2k series resistor I could use as an output for testing.  My kit even came with a GPIO to Breadboard Interface Board which made it easy to connect the Pi to the breadboard.  I connected my pushbutton circuit to BCM GPIO 18 on physical pin 12 and my LED circuit to BCM GPIO 25 on physical pin 22.  Here is a view of the circuit:

Test circuits on breadboard

Ok, now I have all the hardware configured.  I just need to figure out how to control it with software.  That’s a task for the next post.