Creating a Fan-out menu in a Hybrid mobile app

The other day, I set out to create a fan-out menu in an Ionic app.  You know the type – you tap on a button such as Share and options for things like Facebook, Twitter, etc. appear in a fan of icons.  Tapping on one of the icons would then post your message to the selected social media site.

fan-out menu

Per my usual modus operandi, I started Googling, hoping to find a nice tutorial or example I could “repurpose”, but surprisingly, I didn’t find much.  So this post will document the solution I came up with.  The completed code can be found at https://github.com/dschultz-mo/Ionic-fan-out-menu-demo.

Create a starter app

Let’s start by creating a blank Ionic app.  If you are new to Ionic, it is really simple to get started.  Follow the brief Getting Started instructions to install Node and Ionic.

Now create a project based on the tabs template:

$ ionic start demoApp tabs

For now, you can just say ‘no’ when asked if you want to create an ionic.io account, but I would highly recommend you revisit this and dig into Ionic a bit more if you are doing hybrid development.

Change directory into the new app project and start the ionic server.  This will launch the app in your default browser.

$ cd demoApp
$ ionic serve

Add a button

Let’s add a button to the bottom of the first tab.  Tapping this button will open and close our fan-out menu.

Open the file www/templates/tab-dash.html.  Add the code

<button id="share-button">
  Share
</button>

just before   </ion-content>.

Open the file www/css/style.css.  Add the code

#share-button {
  position: relative;
  top: 0px;
}

Assigning a relative position with no offset may seem strange, but play along for a moment.  Hopefully the reason will become clear in a few more steps.

Save the files.  Because of live reload, the browser will instantly refresh and your new button will appear.

(Screenshot: the new Share button)

Add a single menu item

Let’s start with one menu item so we see how this works, then build from there.

Inside the button tag in tab-dash.html, add the following code:

<img class="menu-fan"
     src="https://www.facebookbrand.com/img/fb-art.jpg"
     style="left:40px;top:40px"/>

This will add a Facebook icon image inside the button offset down and to the right.

Add some styling to the css file for the class called menu-fan that is applied to the item in our fan-out menu:

.menu-fan {
  height: 20px;
  width: auto;
  position: absolute;
  z-index: 1000;
  opacity: 1;
}

Save the files.  You should now see the Facebook icon.

(Screenshot: the Facebook icon)

Next, we need a way for taps on the Share button to alternately hide and show the Facebook icon.  Angular has a built-in way to do that with ng-show.  We will have taps on the share-button toggle the boolean value of a scope variable we will name showMenu using the ng-click directive.  We will then bind the ng-show directive of the Facebook img to that variable.  The html should look like this:

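(The original markup was posted as a screenshot; this is a reconstruction of what the text describes, with the ng-click toggle on the button and the ng-show binding on the img.)

<button id="share-button" ng-click="showMenu = !showMenu">
  Share
  <img class="menu-fan"
       ng-show="showMenu"
       src="https://www.facebookbrand.com/img/fb-art.jpg"
       style="left:40px;top:40px"/>
</button>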

Save.  Now clicks will cause the Facebook icon to alternately appear and disappear.

A note on positioning…  The icon img is positioned using absolute positioning, with the left and top offsets specified in the style attribute.  This is why the Share button needed to be explicitly positioned as ‘relative’.  The rule is that absolute positioning is relative to the element’s first positioned (i.e. non-static) ancestor.  Positioning the Share button as relative makes it that ancestor, so the icon’s offsets are measured from the button.

Add some bling

Without a lot of imagination, you can see how we could create several icons for multiple social media outlets, fan them out in an arc and associate a scope action to each so that tapping on any one of them would invoke a routine that does what we want.  We will do that in a moment.  But right now, the Facebook icon just appears and disappears.  Not very exciting. Let’s add a little pizazz to this one icon first, then add in the others.

The ng-show directive simply adds an ng-hide class to the img when the user wants it hidden.  By default, all that class does is apply the style “display: none !important” to the img.  But, you can actually override what the ng-hide class does!

Go back to the style.css stylesheet.  Add the following to the menu-fan class:

  -webkit-transition: all linear 0.2s;
  -moz-transition: all linear 0.2s;
  -o-transition: all linear 0.2s;
  transition: all linear 0.2s;

Also, create a new .menu-fan.ng-hide rule like this:

.menu-fan.ng-hide {
  /* left/top pull the icon back to the button's upper-left corner when hidden;
     !important is needed to override the inline style attribute */
  left: 0px !important;
  top: 0px !important;
  opacity: 0;
  -ms-transform: rotate(360deg); /* IE 9 */
  -webkit-transform: rotate(360deg); /* Safari */
  transform: rotate(360deg); /* Standard syntax */
}

The entire stylesheet should look like this now:
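(The original screenshot is missing; reconstructed from the pieces above, style.css should now contain the following, plus whatever the starter template already had.)

#share-button {
  position: relative;
  top: 0px;
}

.menu-fan {
  height: 20px;
  width: auto;
  position: absolute;
  z-index: 1000;
  opacity: 1;
  -webkit-transition: all linear 0.2s;
  -moz-transition: all linear 0.2s;
  -o-transition: all linear 0.2s;
  transition: all linear 0.2s;
}

.menu-fan.ng-hide {
  left: 0px !important;
  top: 0px !important;
  opacity: 0;
  -ms-transform: rotate(360deg); /* IE 9 */
  -webkit-transform: rotate(360deg); /* Safari */
  transform: rotate(360deg); /* Standard syntax */
}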

Save the file and click on Share.  Now when the Facebook icon appears, it spins as it moves into position from the upper left corner of the Share button.  It spins back and disappears on the next click.  Let’s look at how the CSS we added does that.

First, we added a transition to the menu-fan class.  What this says is that any time an attribute is changed on the object, it should transition according to the given rule (there are four rules here to ensure cross-browser compatibility).  This particular rule says that it should apply to all attribute changes and that the change should be applied linearly over 0.2 seconds.  There are a lot of transition options that you could experiment with to add even more pizazz.

Next, we defined some attributes that will apply to the object only when the .ng-hide class is applied to it.  The first two are positional – when the object hides, we want it to move to the upper-left corner of the Share button.  Thanks to the transition we defined above, it will move back and forth linearly over 0.2 seconds.  Opacity of 0 means it is transparent, so as it moves, the img will also fade in and out.

The transform attribute is probably the least obvious.  rotate(360deg) means the element will spin around one revolution as it moves and fades over 0.2 seconds.  Here again, you have a number of transforms you can apply if you are looking for a different effect.

Expand to multiple menu items

Ok, so let’s scale this out.  Let’s say you want to add four social outlets – Facebook, Instagram, Pinterest and Twitter.  You could calculate the top and left values required for each to make a nice arc and add an img definition for each one of them in the html – hard coding the position, image source location and action you want performed.  But there may be a better way.  First off, the Angular ng-repeat directive is made for this sort of stuff.  It repeats over an array of items and builds the html you need.  We should really be using that.  So instead of code that looks like this:

(Screenshot: four hard-coded img elements, one per social network)

It will look like this:

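(Another screenshot in the original; this reconstruction follows the bullets below and presumably keeps the ng-show binding from before.)

<img class="menu-fan"
     ng-repeat="item in socialFanButtons"
     ng-show="showMenu"
     ng-src="{{item.src}}"
     ng-click="item.action()"
     style="left:{{item.left}}px;top:{{item.top}}px"/>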

  • ng-repeat – creates the img element for each item in a scope array called socialFanButtons.  We will define that array in a minute.
  • ng-src – the src of the img will be whatever the value of the src property of the array element is.
  • ng-click – clicking this image will invoke the function defined by the action property.
  • style – This will now pick up the left and top properties of the array element.

So now, we could just create an array in the DashCtrl controller like:

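(The original was a screenshot; here is a sketch of such an array.  The image paths, positions and handlers are illustrative.)

.controller('DashCtrl', function($scope) {
  $scope.showMenu = false;

  $scope.socialFanButtons = [
    { src: 'img/facebook.png',  left: 80, top: 0,
      action: function() { console.log('share to Facebook'); } },
    { src: 'img/instagram.png', left: 69, top: 40,
      action: function() { console.log('share to Instagram'); } },
    { src: 'img/pinterest.png', left: 40, top: 69,
      action: function() { console.log('share to Pinterest'); } },
    { src: 'img/twitter.png',   left: 0,  top: 80,
      action: function() { console.log('share to Twitter'); } }
  ];
})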

Well, that does work, but let’s use the computer to do what it was made for – to compute.  Let’s create a routine that will compute the left and top values so we don’t have to.  We will define an array with the src and action fields we need, but then create a function that will compute the left and top values based on a direction (degrees), spread angle (degrees) and distance (radius).
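Here is one way that routine might look.  The function name and the clockwise-from-east angle convention are my own choices for this sketch:

// Fan the buttons out in an arc around the Share button.
// direction: center line of the fan, in degrees clockwise from "east"
//            (screen y grows downward, so 45 degrees points down-right);
// spread:    total angle covered by the fan, in degrees;
// radius:    distance of each icon from the button, in pixels.
function positionFanButtons(buttons, direction, spread, radius) {
  var start = direction - spread / 2;
  var step = buttons.length > 1 ? spread / (buttons.length - 1) : 0;
  buttons.forEach(function(button, i) {
    var angle = (start + i * step) * Math.PI / 180; // degrees to radians
    button.left = Math.round(radius * Math.cos(angle));
    button.top  = Math.round(radius * Math.sin(angle));
  });
}

// Four icons fanned across the lower-right quadrant, 80px from the button:
positionFanButtons($scope.socialFanButtons, 45, 90, 80);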

Conclusion

So here you see that by leveraging AngularJS directives, some creative CSS and some geometry, we have created a reusable pattern including a procedure and a stylesheet to generate fan-out menus.

“Containing” your MobileFirst server

IBM MobileFirst Platform Foundation version 7.1 adds support for running the platform in a container on the cloud in IBM Bluemix.  The idea is that anyone should be able to spin up a MobileFirst server in the cloud, deploy applications and adapters to it and reconfigure mobile apps to use it quickly and easily.  I thought I would give it a try with my Internet of Things Doorbell app and see how easy it really is.

Here is the architecture of my doorbell system.  The only parts that should be affected are the MobileFirst Platform Foundation (MFP) server component and the native iOS Mobile App itself.

Doorbell Architecture

MobileFirst 7.1 provides two options for moving to containers: the prebuilt Getting Started Image and the ability to roll your own image based on the Evaluation on Containers package.

The Getting Started Image is really meant for kicking the tires and evaluating MFP on Bluemix.  It is not at all intended for production.  The server, data proxy, analytics server, database and a sample application are all crammed into one container.  You also cannot make any runtime changes, such as renaming the prebuilt runtime, adding more runtimes, or customizing the runtime with back-end code, SSL certificates, etc.  The upside is that you should be able to deploy the prebuilt image very quickly and easily add your own adapter and application.

The Evaluation on Containers is meant for building production deployments.  Why it is called “Evaluation” when Getting Started is really for evaluations is a mystery to me.  At any rate, Evaluation on Containers is not a prebuilt container.  It is actually a “package” of artifacts you use to build your own custom container in Docker.  You build the container locally on your machine then push it up to Bluemix Containers to run it.

For today’s experiment, I’m going to use the prebuilt Getting Started Image.

Create the MFP container

Creating the container is similar to creating an application on Bluemix.

  1. Log into Bluemix at https://ibm.biz/IBM-Bluemix.  If you don’t yet have a Bluemix ID, get one for free by clicking the SIGN UP link in the upper right corner of the landing page.
  2. From the Dashboard, click START CONTAINERS.

    Start Containers

  3. On the next page, there will be a list of all the available prebuilt container images.  Click ibm-mobilefirst-starter.  This is the Getting Started Image.

    ibm-mobilefirst-starter container image

  4. This brings up the creation page for the container.  There are a few things that need to be configured on this page which may not be obvious but are very important.
    1. Provide a Container name.  This can be whatever you want but must be unique within your “space”.
    2. Select the Size of the container.  The Knowledge Center recommends at least the “Small” configuration.
    3. The Public IP address field defaults to Leave Unassigned.  You definitely don’t want that.  The container will need a public IP so that the iOS app can connect to it.  If a public IP has already been allocated to the account, select it here.  If not, select Request and Bind Public IP.
    4. Finally, assign the Public ports that will be opened.  It is required that ports 80, 9080 and 443 be opened.  This field is a little confusing.  The port numbers must be entered separated by commas.  The placeholder example shows them also separated by spaces but the field actually does not allow spaces.  Enter 80,9080,443.

    GettingStarted Container creation page

  5. Click Create to deploy the container.  After only a few seconds, “Your container is running” will appear on the container’s overview page.  Note the public IP address that was bound to the container.  This is how users and apps will get access to it.

    Container Overview Page

Register with the Container

The admin user that will be used to access the container requires a password, and this is your opportunity to define it.  Please be more creative with your password than “admin”.  Anyone who knows the public IP address of the container will be able to get to this page and gain access to the Operations Console if they guess the password.

  1. Open a browser tab to the Public IP address (e.g. http://134.168.6.227 in my case).  The first time this URL is opened, the container registration page will be shown.  Enter a password here for the admin username.
  2. Click Register.

    Container Registration Screen

  3. After a few seconds, the MobileFirst Operations Console will appear.  There is one existing runtime, MobileFirstStarter, which contains three applications and three adapters.  These support the WishList sample application.

Accessing sample code, Operations Console and Analytics

The next time the container’s Public IP address is opened, a landing page will be shown that lets you download the WishList sample application code, open the Operations Console to manage your applications, open the Analytics Console to view analytics and data about your apps, and download server logs for troubleshooting.

Container Landing Page

Update the MobileFirst project

Some modifications to the settings in the MobileFirst project containing the application and adapters are necessary to prepare the artifacts for deployment to the new runtime.  I have a MobileFirst project called DoorbellIOSNative that contains the CloudantAdapter adapter and an APIiOS SDK app.  In my case, the adapter is completely portable and will require no changes to run it on a new runtime.  I will, however, need to modify the SDK app’s worklight.plist file to point to my new host/port and the name of the new runtime.

  1. Open the DoorbellIOSNative/apps/APIiOS/worklight.plist file in an editor.
    1. Replace the host value with the Public IP of the container.
    2. Replace the port value with 9080.
    3. Replace the wlServerContext value with “/MobileFirstStarter/” (a sketch of the resulting file follows this list).
  2. Rebuild the project and prepare it for deployment.  From the MFP command line tool, use the command ‘mfp push‘.  From MFP Studio in Eclipse, Run As > Run on MobileFirst Development Server.
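For reference, here is a sketch of the relevant fragment of worklight.plist after the edits.  The IP address is illustrative and the key names are those of the standard MFP 7.1 client property file:

    <key>protocol</key>
    <string>http</string>
    <key>host</key>
    <string>134.168.6.227</string>
    <key>port</key>
    <string>9080</string>
    <key>wlServerContext</key>
    <string>/MobileFirstStarter/</string>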

Deploy the adapter and application to the container

Now the application and adapter need to be deployed to the MobileFirst runtime running in the container on Bluemix.

  1. Open the MobileFirst Operations Console (http://<your-public-ip>:9080/worklightconsole).
  2. Click Add new app or adapter.  Select the CloudantAdapter.adapter file in the bin directory of your MobileFirst project.
  3. Repeat the process for the APIiOS-iOSnative-1.0.wlapp application file.

Update the native iOS application

The application itself needs to be updated so that it will look in the right place for the MFP server.

  1. Open the Xcode project.
  2. Update the references
    1. If the worklight.plist and WorklightAPI artifacts were added to the project by NOT checking “Copy items if needed”, then there is nothing else to do before rebuilding.  The artifacts in Xcode are links to the actual artifacts that were already updated.
    2. If copies of these artifacts were made, delete them now.  Drag-n-drop the new worklight.plist and WorklightAPI artifacts onto the Xcode Project Navigator to add them.
  3. Build and run the app on a device.  The application is now hosted on an MFP server in the cloud!

Again to be clear, the Getting Started Image is not intended for production workloads.  I’ll take a look at building a custom container that can be used for production in a future post.  But the Getting Started Image did enable me to deploy my existing app to a functional MFP server running in the cloud on IBM Bluemix in about 10 or 15 minutes.  When you compare that with the thought of standing up a server, installing MFP Server, creating a runtime, deploying the adapter, deploying the application and configuring the client app, it’s pretty compelling.

My Internet of Things and MobileFirst adventure – Part 8: Build the mobile app

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Finally, we come to creating the mobile app that will tie this all together.  This is going to involve a lot of steps so I will cover the basic outline of what I did without getting into every line of code.  You can download all the source files for this project (including the Node-RED flows) from IBM Bluemix DevOps Services if you want to see the details.

But at a high level, here’s what needs to be done:

  1. Add the IBM Mobile Push service to the Bluemix application.  This will manage the push notifications with APNS (Apple Push Notification Service).
  2. Add nodes to the Bluemix Node-RED flow to initiate a push request when it receives the visitorAlert event.  This is how Bluemix will tell the mobile app that the doorbell was pressed.
  3. Create a MobileFirst project.  This will house an adapter that will retrieve the file from Cloudant as well as generate the SDK library the iOS app will use.
  4. Create a new Xcode project to house the native iOS app.
  5. Add the MobileFirst SDK to the Xcode project.
  6. Add the Bluemix SDK to the project.
  7. Write code.

Add the IBM Mobile Push service to the Bluemix application

  1. Open the Bluemix application.
  2. Click ENABLE APP FOR MOBILE and then YES and then CONTINUE.
  3. Click ADD A SERVICE OR API  and select Push from the Mobile category, then click USE and then RESTAGE.  I should point out that there is a Push iOS 8 service as well.  I chose not to use it and stuck with the older Push service.  Part of the reason was that Push iOS 8 depends on the newer Advanced Mobile Access and I already knew using it would require customizing my Bluemix Node-RED instance to support it.  I took the path of least resistance, but at some point, I will need to adopt AMA to keep up with the times.
  4. At the top of the application’s overview page, you will now see the app’s registration credentials.  You will need these later to tell the mobile app how to connect to Bluemix.

Configuring Apple Push is not for the faint of heart.  First of all, it requires you to have an Apple Developer license.  You can buy a personal license for $99 per year if you don’t have access to a corporate license.  Once you have a license, you must use the Apple Developer Portal to create a Device ID, an App ID and a Provisioning Profile as well as an SSL certificate that must be used by any application that wants to request APNS to send a push notification.  That process is out of scope for this blog, but you can read about it in the iOS Developer Library.

Once you have your SSL key, you will need to register it with the IBM Bluemix Push Service.

  1. Go to your Bluemix application and click the Push service.
  2. Select the Service Mode Sandbox or Production, depending on whether you have a Development or Production SSL key from Apple.

    Bluemix Push Dashboard

  3. Click EDIT under APNS.
  4. Upload your p12 SSL key and enter the password for it.

It is possible to use the IBM Push dashboard to send out test notifications.  This can be helpful in debugging the process.

Bluemix Push Notification tab

Add nodes to Bluemix Node-RED to initiate Push notifications

Node-RED on Bluemix includes nodes for IBM Push.  However, I was not able to get them to work as expected.  So I chose to go about this through brute force with HTTP nodes.  This has the advantage of demonstrating more of the details of how the Bluemix Push SDKs work, but I should really revisit this someday.

  1. Open your Bluemix Node-RED flow editor.
  2. Add a function node.  Its input should be connected to the output of the visitorAlert Event IBM IoT In node.  Name it send notification.
    1. Add the following code.  Yes, I am cheating: the picture filename is not a URL, but I got lazy and decided to use the url element to pass it.
      var newmsg = {};
      newmsg.headers = { 
          "IBM-Application-Secret" : "<your-secret>" };
      newmsg.payload = {
          "message" : {
              "alert"    : "Someone is at the door!  " +
                     "Would you like to see who it is?",
              "url"      : msg.payload.pictureFilename
          },
          "settings" : {
              "apns"   : {
                  "sound" : "doorbell/Doorbell.mp3"
              }
          },
          "target" : {
              "platforms" : ['A']
              }
      };
        
      return newmsg;
      
    2. Replace <your-secret> with the value of your application secret code from the application’s overview page.
  3. Add an http request function node (note – this is not an http input nor http output node).  Its input should be connected to the output of the send notification function node.
    1. Set its method to POST.
    2. Set the URL to https://mobile.ng.bluemix.net:443/push/v1/apps/<your-appId>/messages where <your-appId> is the application appId found on the overview page.
    3. Set the return type to UTF-8 String.
  4. Add debug nodes if you want.

    Final Bluemix Node-RED flow

Create a MobileFirst project

I’m going to leverage the IBM MobileFirst Platform Foundation for this mobile application.  This is, of course, not necessary in order to use Bluemix services.  You can access Bluemix services directly from an iOS native app.  I’m not going to demonstrate them in this tutorial, but the MobileFirst Foundation provides a long list of capabilities that make developing applications easier and more secure.  What it does mean is that there will be a server-side component called an adapter that will do the actual Cloudant communication.  The iOS app will use the MobileFirst SDK to invoke routines on the adapter.

To create the MobileFirst project, you need to have either the MobileFirst Studio extension to Eclipse or the MobileFirst command line interface installed.  I’m going to be using the command line interface here.

  1. Create a new project from the command line with ‘mfp create DoorbellIOSNative‘.
  2. cd DoorbellIOSNative
  3. Start the mfp Liberty server with ‘mfp start‘.
  4. Create an adapter that will retrieve the Cloudant data with ‘mfp add adapter CloudantAdapter --type http‘.
    1. Implement the adapter logic.  The simplest way to do that is to copy the CloudantAdapter.xml and CloudantAdapter-impl.js files from this MobileFirst tutorial (a sketch of the key procedure appears after this list).
    2. Be sure to edit the domain, username, and password values in the xml file so they reflect the values from your Bluemix overview page.
  5. Add a new iOS native API using ‘mfp add api APIiOS -e ios‘.  There are two configuration files you need to edit in the project’s /apps/APIiOS folder.  Use this MobileFirst tutorial as a guide:
    1. The worklight.plist file contains several settings that your iOS app will need to know in order to connect to the MobileFirst server which hosts the adapter.
    2. The application-descriptor.xml file defines the application name, bundleID and version for the MobileFirst server.
  6. Once you have finished configuring these files, deploy the project to the MobileFirst server with ‘mfp push‘.
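For reference, here is a sketch of what the adapter’s search procedure might look like in CloudantAdapter-impl.js.  The parameter order (database, design document, index, limit, include_docs, query) is an assumption based on how the app invokes it later in this post; the actual tutorial files may differ:

    // Sketch only -- queries a Cloudant search index through the HTTP adapter.
    function search(database, ddocName, indexName, limit, includeDocs, query) {
        var input = {
            method : 'get',
            returnedContentType : 'json',
            path : database + '/_design/' + ddocName + '/_search/' + indexName,
            parameters : {
                q : query,
                limit : limit,
                include_docs : includeDocs
            }
        };
        return WL.Server.invokeHttp(input);
    }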

Create an Xcode project

The Xcode project will contain your iOS native app source code.  Create a blank project and create an app with a UIImageView to contain the picture.  If you are not that comfortable coding, you can download my project from IBM Bluemix DevOps Services.  Developing the app UI is not really the focus here, but rather showing how you configure the project to access the MobileFirst adapter and IBM Push from Bluemix.

Add the MobileFirst SDK to the Xcode project

When you created the MobileFirst project above, two things were created that you will need to add to your Xcode project – the worklight.plist file and the WorklightAPI folder.  You also need to add several frameworks to the Xcode project to get access to the libraries you will need.  The IBM MobileFirst Platform Foundation Knowledge Center does a good job of explaining how to do this.

Add the Bluemix SDK to the project

The SDKs you need for IBM Push are included with a bundle of SDKs for Bluemix.  You can download them from the IBM Bluemix Docs.  The only tricky part here is that these libraries are Objective-C libraries.  Since I coded my iOS app in Swift, I had to create an Objective-C Bridging Header.  This is a really easy way to expose Objective-C code to Swift applications.

Write code

There are multiple places in the AppDelegate and ViewController where I needed to add custom code.  The two main tasks were configuring the app to receive and handle push notifications, and using the MobileFirst adapter to retrieve the image.

Push notification configuration

didFinishLaunchingWithOptions

This is sort of the main routine for an iOS application.  It is invoked when the app launches.  Here is where you need to have the app register for push notifications with the following code:

        // ========================================
        //  Register for push notifications
        // ========================================
        // Check to see if this is an iOS 8 device.
        let iOS8 = floor(NSFoundationVersionNumber) > floor(NSFoundationVersionNumber_iOS_7_1)
        if iOS8 {
            // Register for push in iOS 8
            let settings = UIUserNotificationSettings(forTypes: 
                UIUserNotificationType.Alert | 
                UIUserNotificationType.Badge | 
                UIUserNotificationType.Sound, categories: nil)
            UIApplication.sharedApplication().registerUserNotificationSettings(settings)
            UIApplication.sharedApplication().registerForRemoteNotifications()
        } else {
            // Register for push in iOS 7
            UIApplication.sharedApplication().registerForRemoteNotificationTypes(
                UIRemoteNotificationType.Badge | 
                UIRemoteNotificationType.Sound | 
                UIRemoteNotificationType.Alert)
        }

Notice that the way you register for push changed with iOS 8, so the code here first determines the OS level, then does what it needs to do.

didRegisterForRemoteNotificationsWithDeviceToken

This method is invoked by the framework when the app has successfully registered with APNS.  Here is where you want to initialize the Bluemix and Push SDKs.  You would replace the placeholders with the route, appId, and appSecret from your Bluemix application.

        // Initialize the connection to Bluemix services
        IBMBluemix.initializeWithApplicationId(
            "<your-app-id>",
            andApplicationSecret: "<your-app-secret>",
            andApplicationRoute: "<your-app-route>")
        
        pushService = IBMPush.initializeService()
        if (pushService != nil)  {
            var push = pushService!
            push.registerDevice("testalias",
                withConsumerId: "testconsumerId",
                withDeviceToken: self.myToken).continueWithBlock{task in
            
                if(task.error() != nil) {
                    println("IBM Push Registration Failure...")
                    println(task.error().description)
                    
                } else {
                    println("IBM Push Registration Success...")
                }
                return nil
            }
        } else {
            println("Push service is nil")
        }

didReceiveRemoteNotification

This method is invoked by the framework whenever a push notification is received.  The payload of the push notification is passed in the userInfo object.  In a series of ‘if let’ statements, I extract the bits of information I need.  Then I determine if the app was in the background or inactive.  If so, that means the user already saw the notice, read it and chose to invoke it by tapping on it.  In that case, go straight to the code that gets the picture from Cloudant and loads it into the UIImageView.  If the app was in the foreground, create a message alert and only load the image if the user taps OK.

        if let aps = userInfo["aps"] as? NSDictionary {
            if let alert = aps["alert"] as? NSDictionary {
                if let fileName = userInfo["URL"] as? NSString {
                    if let message = alert["body"] as? NSString {
                        if let sound = aps["sound"] as? NSString {
                            if (application.applicationState == UIApplicationState.Inactive ||
                                application.applicationState == UIApplicationState.Background) {
                                    println("I was asleep!")
                                    self.getPicture(fileName)
                                    
                            } else {
                                var noticeAlert = UIAlertController(
                                    title: "Doorbell",
                                    message: message as String,
                                    preferredStyle: UIAlertControllerStyle.Alert)
                                
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Ok",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("User wants to see who's at the door")
                                        println(fileName)
                                        self.getPicture(fileName)
                                }))
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Cancel",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("Handle Cancel Logic here")
                                }))
                                
                                let fileURL:NSURL = NSBundle.mainBundle()
                                    .URLForResource(
                                        "Doorbell", 
                                        withExtension: "mp3")!
                                
                                var error: NSError?
                                self.avPlayer = AVAudioPlayer(
                                    contentsOfURL: fileURL, 
                                    error: &error)
                                if avPlayer == nil {
                                    if let e = error {
                                        println(e.localizedDescription)
                                    }
                                }
                                
                                self.avPlayer?.play()
                                
                                // Display the dialog
                                self.window?.rootViewController?
                                    .presentViewController(
                                        noticeAlert, 
                                        animated: true, 
                                        completion: nil)   
                            }   
                        }
                    }
                }
            }
        }

Retrieve image from Cloudant

didFinishLaunchingWithOptions

When the app first starts up, connect to the MobileFirst Platform Server.  I’m leaving out some details here, but basically you need to call wlConnectWithDelegate.  That method takes a listener parameter, but that listener is really trivial in my case.

        let connectListener = MyConnectListener()
        WLClient.sharedInstance().wlConnectWithDelegate(connectListener)
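For completeness, a minimal sketch of that listener, assuming the standard WLDelegate protocol (the class name matches the call above):

        class MyConnectListener: NSObject, WLDelegate {
            func onSuccess(response: WLResponse!) {
                println("Connected to MobileFirst Server")
            }
            func onFailure(response: WLFailResponse!) {
                println("Connection failed: \(response.errorMsg)")
            }
        }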

getPicture

This routine, called when a user chooses to see the picture from the push notification, uses the MobileFirst adapter procedures to search for, then retrieve, the image from Cloudant.

        // Search Cloudant for the document with the given fileName
        let searchRequest = WLResourceRequest(
            URL: NSURL(string: "/adapters/CloudantAdapter/search"), 
            method: WLHttpMethodGet)
        let queryValue = "['pictures','ddoc','pictures',10,true,'fileName:\"" + 
            (fileName as String) + "\"']"
        searchRequest.setQueryParameterValue(
            queryValue,
            forName: "params")
        searchRequest.sendWithCompletionHandler {
            (response: WLResponse!, error: NSError!) -> Void in
            if(error != nil){
                println("Invocation failure. ")
                println(error.description)
            }
            else if(response != nil){
                let jsonResponse = response.responseJSON
                if let rows = jsonResponse["rows"] as AnyObject? as? NSArray {
                    if (rows.count > 0) {
                        if let row = rows[0] as AnyObject? as? Dictionary<String,AnyObject> {
                            if let doc = row["doc"] as AnyObject? as? Dictionary<String,AnyObject> {
                                if let payload = doc["payload"] as AnyObject? as? Dictionary<String,AnyObject> {
                                    var base64String : String = payload["value"] as! String
                                    
                                    // Strip off prefix since UIImage doesn't seem to want it
                                    let prefixIndex = base64String.rangeOfString("base64,")?.endIndex
                                    base64String = base64String.substringWithRange(
                                        Range<String.Index>(
                                            start: prefixIndex!, 
                                            end: base64String.endIndex))
                                    // Strip out any newline (\n) characters
                                    base64String = base64String.stringByReplacingOccurrencesOfString(
                                        "\n", 
                                        withString: "")
                                    
                                    // Convert to NSData
                                    let imageData = NSData(
                                        base64EncodedString: base64String, 
                                        options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
                                    
                                    // Create an image from the data
                                    let image = UIImage(data: imageData!)
                                    
                                    // Stuff it into the imageView of the ViewController
                                    AppDelegate.vc!.updatePicture(image!)
                                    
                                }
                            }
                        }
                    }
                }
            }
        }

Testing

With all this in place, I build the app in Xcode and deploy it to my iPhone (remember, it must be a physical device because APNS doesn’t work with a simulator).  My app screen looks like this:

Doorbell App - Initial Screen

I walk up to my door and press the doorbell button.  The Node-RED flow on the Raspberry Pi takes a picture, stores it on the local file system and sends an MQTT message to the IBM Internet of Things Foundation broker.  The Node-RED flow running on Bluemix in the cloud receives the message and tells the Raspberry Pi to upload the picture to the Bluemix Cloudant database.  At the same time, Node-RED uses the IBM Push service, also running on Bluemix, to send a remote notification to the app on my phone.  The app receives the push notification and presents an alert to me.

Doorbell App - Push Notification

I tap Ok to see who is at the door.  The iOS app uses the IBM MobileFirst SDK to invoke adapter routines to first search for, then retrieve, the picture from the Bluemix Cloudant database.  It then pops that picture onto my mobile screen so I can see who is ringing my doorbell, even if I am hundreds of miles from home and can’t do a thing about it.

Doorbell App - Someone at the Door

Conclusion

Well, that about does it.  This has been quite an adventure.  Clearly this isn’t an app you would put into production.  I have over-engineered several areas and it is far more complicated than a doorbell needs to be.  But we did get a chance to explore several technologies including:

  • Setting up a Raspberry Pi with an external button input and a camera
  • Node-RED visual editor for controlling and interacting with the Internet of Things
  • Bluemix applications and services including Cloudant Databases, Node.js runtimes and mobile Push.
  • Registering an app with Apple Push Notification Service.
  • IBM MobileFirst Platform Foundation for enterprise security and access to Systems of Record such as Cloudant databases
  • Using Xcode to create a native iOS application that receives push notifications from IBM Bluemix Push and retrieves data from Bluemix Cloudant datastores.

What’s next?  Well, IBM just released MobileFirst Platform Foundation version 7.1 which supports deploying the MobileFirst server in a container in Bluemix – built on Docker technology.  Maybe I will see if I can eliminate my local laptop entirely and have everything except the Raspberry Pi in the cloud!

My Internet of Things and MobileFirst adventure – Part 7a: Further elaboration…

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Well, I have been giving some more thought to the decisions I made in my last post and I think I need to reconsider how I am going to transfer the picture from the Raspberry Pi to the mobile app.  After I discovered the payload limits on the IBM IoT Foundation broker, I decided to push on with MQTT by using the Mosquitto test server.  Now, I know this could work.  I could have the mobile app use something like the Eclipse Paho project JavaScript libraries to connect and subscribe directly to the broker topic to receive the picture.  But the more I think about it, the more convoluted that sounds.  OK, yes, this whole project is a bit contrived – there is no real reason to involve all these moving parts to build a picture-taking doorbell app, but using a second MQTT broker just seems really unnatural.  If I have to switch away from IBM IoT Foundation to transfer the picture, I might as well give myself the opportunity to involve something completely new and different, right?

That’s when I thought of Cloudant.  IBM Cloudant is a NoSQL database as a service (DBaaS) that also runs in Bluemix.  There are Cloudant nodes for Node-RED that I could run directly on the Raspberry Pi (I wouldn’t even need to send the picture file to Node-RED on Bluemix).  I could use the Cloudant SDKs to read the picture from a mobile app directly or from a MobileFirst adapter.  With this new concept in mind, I reworked my architecture diagram a bit.

Revised Architecture

The red line is the new connection.  Node-RED on Bluemix actually uses a Cloudant database to store flows, etc., but I didn’t think that was all that relevant, so I left that line off the diagram.

To reverse my Mosquitto decision and move to the Cloudant approach, I need to do three things:

  1. Create a Cloudant database on Bluemix to store the picture.
  2. Modify the Raspberry Pi Node-RED flow to send the picture to Cloudant instead of Mosquitto.
  3. Clean up the Bluemix Node-RED flow.

Create a Cloudant database

I could create a whole new Cloudant database service on Bluemix to store the pictures coming from Raspberry Pi, but the Bluemix application I created from the Node-RED Starter boilerplate back in Part 6 already has a Cloudant service bound to it so I decided to just create a new database within that service.

  1. From the Bluemix Dashboard, click your Node-RED application.
  2. Note the Cloudant NoSQL DB service tile.  If you click show credentials on that tile, you will see the creds you will need later on to enable the Raspberry Pi (and other clients) to connect to the database.
  3. Click on the Cloudant NoSQL DB service tile, then click Launch.  This opens up the Cloudant database dashboard.  There is already one database running in this service – “nodered”.  This is the database that Node-RED uses to store its artifacts so let’s not muck with that.
  4. Click Add new database and name it pictures.

Later, we are going to need to be able to search the Cloudant documents by the “fileName” attribute.  Searching in Cloudant means you need to create a design document which defines the index by which you want to search.

  1. Click the + next to All Design Docs and choose New Search Index.  For “Save to design document”, choose New Document.  For “Index name”, enter pictures.  Add the following code as the Search index function:
    function(doc){
       if(doc.payload.fileName){
           index("fileName", 
               doc.payload.fileName, {"store": "yes"});
       }
    }
  2. Leave the other settings at their defaults and click Save & Build index.
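To sanity-check the index, you can query it directly over HTTP.  This sketch assumes the design document ended up named ddoc (the name the adapter call in the next post expects); substitute the credentials from the service tile:

    curl 'https://<username>:<password>@<username>.cloudant.com/pictures/_design/ddoc/_search/pictures?q=fileName:%22<your-file-name>%22&include_docs=true'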

Modify Node-RED flow on Raspberry Pi to use Cloudant

The Raspberry Pi Node-RED doesn’t include nodes for Cloudant out of the box, but we can add node-red-node-cf-cloudant similar to the way we added node-red-contrib-gpio at the end of Part 3.

  1. Open an ssh terminal session to the Raspberry Pi.
  2. Issue the commands
    cd ~/.node-red
    npm install node-red-node-cf-cloudant
  3. Stop and restart Node-RED.  Now you will see Cloudant storage nodes in the palette.

    Cloudant nodes in the palette

  4. Remove the mqtt Picture queue node from the Send Picture flow.
  5. Drag a cloudant out storage node onto the flow.  Its input should be the output of the createJSON node.  Configure the cloudant out node.
    1. Choose External service
    2. Configure a new Server.
      1. Host, Username and Password are the connection credentials from the Bluemix service I said you would need.
      2. Name is just a name you give to the server configuration so you can reference it again if needed.
    3. Database should be set to pictures
    4. Leave Operation as insert
    5. Name the node insertPicture.
  6. Deploy the flow.  You will get a warning that your test.mosquitto.org configuration node is unused.  Node-RED keeps connection configurations like this around just in case you want to reuse them.  Click Confirm deploy for now.  If you want to remove these configurations, you can select configuration nodes from the hamburger menu on the right.
  7. Test it by pressing the doorbell button.  The Process Doorbell flow should send an IBM IoT visitorAlert event message to Node-RED on Bluemix which should, in turn, send an IBM IoT sendPicture command message back to Node-RED on Raspberry Pi.  This will eventually create a Base64 encoded image string, package it up with the filename, length and status into a payload that gets sent to Cloudant in Bluemix by the insertPicture node.  See if it worked by going to the Bluemix Cloudant dashboard.  You should see one Document in the pictures database that looks something like this.

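(The screenshot is missing here, but based on the fields the flow packages up, the document should look roughly like this.  The IDs and values are illustrative.)

{
  "_id": "09f8e9c4c43b29ddc17b34ad7c6433d1",
  "_rev": "1-b2c1e8d4a5f6...",
  "payload": {
    "fileName": "/home/pi/pictures/1442431649617.png",
    "value": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...",
    "length": 34512,
    "status": 0
  }
}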

Clean up the Bluemix Node-RED flow

Now let’s remove the MQTT node that receives the picture from the Bluemix Node-RED flow.

  1. Open Node-RED on Bluemix.
  2. Select and delete the MQTT Picture queue node and the debug node attached to it.
  3. Deploy the flow.

Ok, we are finally ready to create a mobile app and wire it up to Bluemix Push and Cloudant to complete the circle.  That’s the next post.

Part 6: Enabling Push Notifications

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

When we left off in the last post, I had a Raspberry Pi that is capable of taking a picture using its camera and storing that picture in a file.  It can also sense that a physical pushbutton has been pressed – a doorbell.  I also have my Bluemix services configured to manage Push notification requests and to broker IoT messages.  Let’s start wiring things together by configuring Push notifications.

Build an iOS app that can receive push notifications

Note: This app evolved and changed a lot as I kept changing my mind on how I wanted to do things.  Again, you may not see why I decided to implement things the way I did until later.  Just play along for now.

I’m writing a Native iOS app in Swift.  I’ll share the entire app in GitHub at the end of this series, but for now I will just show you code snippets of the relevant parts.

To receive remote push notifications, mobile apps must register to receive them. There are really two parts to this: registering with the Apple Push Notification Service to enable the app to receive notifications, and registering with the IBM Push Notifications service to enable it to send the requests on behalf of the Raspberry Pi app.

Register for remote notifications with APNS

The following code defines two notification actions and registers them with APNS to enable the app to display alert boxes with these buttons, play sounds and update badge counts in response to remote notifications:
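(The original snippet was posted as an image.  This reconstruction assumes the iOS 8 notification-actions API; the function name, action identifiers and titles are illustrative.)

        func registerForPushNotifications(application: UIApplication) {
            // Two actions the user can take directly from the notification
            let viewAction = UIMutableUserNotificationAction()
            viewAction.identifier = "VIEW_ACTION"
            viewAction.title = "View"
            viewAction.activationMode = .Foreground

            let dismissAction = UIMutableUserNotificationAction()
            dismissAction.identifier = "DISMISS_ACTION"
            dismissAction.title = "Dismiss"
            dismissAction.activationMode = .Background

            // Group the actions into a category the push payload can reference
            let category = UIMutableUserNotificationCategory()
            category.identifier = "DOORBELL_CATEGORY"
            category.setActions([viewAction, dismissAction], forContext: .Default)

            let categories: Set<NSObject> = [category]
            let settings = UIUserNotificationSettings(
                forTypes: UIUserNotificationType.Alert |
                    UIUserNotificationType.Badge |
                    UIUserNotificationType.Sound,
                categories: categories)
            application.registerUserNotificationSettings(settings)
            application.registerForRemoteNotifications()
        }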

You call this procedure from the AppDelegate’s didFinishLaunchingWithOptions method.

Register the device token with IBM Push Notifications service

You know registration with APNS was successful when the system calls your AppDelegate’s didRegisterForRemoteNotificationsWithDeviceToken method, so here’s where you register with IBM Push Notification service, passing along the device token.
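The original snippet was also an image; a condensed sketch follows (the full version, with error handling, appears in the Part 8 post above):

        func application(application: UIApplication,
            didRegisterForRemoteNotificationsWithDeviceToken deviceToken: NSData) {
            // The Push SDK wants the token as a plain hex string (an assumption
            // here); strip the brackets and spaces from the token's description.
            self.myToken = deviceToken.description
                .stringByReplacingOccurrencesOfString("<", withString: "")
                .stringByReplacingOccurrencesOfString(">", withString: "")
                .stringByReplacingOccurrencesOfString(" ", withString: "")

            // Connect to Bluemix, then register the device with IBM Push.
            IBMBluemix.initializeWithApplicationId(
                "<your-app-id>",
                andApplicationSecret: "<your-app-secret>",
                andApplicationRoute: "<your-app-route>")

            pushService = IBMPush.initializeService()
            pushService?.registerDevice("testalias",
                withConsumerId: "testconsumerId",
                withDeviceToken: self.myToken)
        }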

Again, realize that this is not complete code, but rather just the important parts.

Invoke a Push notification from the Raspberry Pi

I’m going to change up the Node-RED flows that I created in Part 4.  The flow I had created caused the Pi to capture a picture as soon as the doorbell was pressed.  I am going to have Node-RED only send a push notification to the app when the doorbell is pressed.  The picture won’t actually be taken until the user requests it in response to receiving the Push notification on the iPhone app.  No, I don’t think this is the best design either, but again, I am using this as an opportunity to learn as much as I can about IoT, so this design will enable me to eventually send MQTT commands from the app to the Pi.

There is a Node-RED node package that sends push notification requests to the IBM Push Notifications service.  But what I found is that this node only accepts a simple string to use as the message alert string.  The IBM Push Notifications service can optionally accept a JSON object that defines a number of other items.  I would like to provide some of these other items in my Push message so I am going old school and using an HTTP node instead.  This additionally illustrates how the IBM Push Notifications service can be invoked with a simple HTTP POST.

Raspberry Pi Node-RED flow

I modified my Node-RED flow to the following:

Node-RED flow for processing the doorbell

The “send notification” and “push” nodes are new and require a little explanation.

HTTP Request node

I did not customize this node at all, with the exception of naming it.  The address, headers and content of the message will be provided in the incoming message from the function node.

Send Notification function node

  • Line #3:  The pushbutton will send a message (“1”) when pressed and a message (“0”) when released.  Also, in order to simulate a button press for testing, I added a timestamp Inject node as an input to the node as well.  I only want the node to be triggered when the payload is NOT 0 – meaning either the pushbutton was pressed (not released) or the message is a timestamp.
  • Line #5:  This request will use the HTTP POST verb
  • Line #6:  The URL address of the request.  You will need to replace “your-appID-here” with the AppGUID from your Bluemix application
  • Line #7:  The appSecret HTTP header value needs to be the appSecret from your Bluemix IBM Push Notifications service.
  • Line #8 – #22:  This defines the payload message that will be sent to the IBM Push Notifications service.  For a full explanation, click Model to the right of body at https://mobile.ng.bluemix.net/imfpushrestapidocs/#!/messages/post_apps_applicationId_messages.
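The function node’s code itself was posted as a screenshot; this sketch is reconstructed from the bullets above (the line numbers in the bullets refer to the original screenshot).  The endpoint path follows the REST API documentation linked above, and the alert text and sound are illustrative:

    // Ignore button releases (payload 0); accept presses and timestamp injects
    if (msg.payload != 0) {
        var newmsg = {};
        newmsg.method = "POST";
        newmsg.url = "https://mobile.ng.bluemix.net/imfpush/v1/apps/" +
            "your-appID-here" + "/messages";
        newmsg.headers = { "appSecret" : "your-appSecret-here" };
        newmsg.payload = {
            "message" : { "alert" : "Someone is at the door!" },
            "settings" : { "apns" : { "sound" : "Doorbell.mp3" } },
            "target" : { "platforms" : ["A"] }
        };
        return newmsg;
    }
    return null;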

Testing

Once deployed, clicking on the timestamp node should cause the function node to create a message and the push node to send it.  You can confirm by verifying the output of the msg node in the debug view.  You should see a statusCode of 202 and a header x-backside-transport value of “OK OK”.

Assuming you have added the code above for registering for remote notifications and registering the device token with IMFPush to even a simple iOS app, you should also see the notification appear on your device.

Diagnosing problems with Push notifications is difficult.  You either get the notification on the mobile device or you don’t.  There are no diagnostic messages returned from APNS or log files created.  The most likely cause of a push not finding its way to your phone is that the SSL certificate wasn’t created or installed properly.

Next

The next thing is to set up the basics required for the app to request a picture and the Pi to respond by sending it to the app via IoT Foundation.

That will be the next blog post in this series.

Part 5: Set up the Bluemix application environment

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Today I am going to configure my Bluemix environment that will talk to my Raspberry Pi.  I already have a Bluemix account, but you can go to https://ibm.biz/IBM-Bluemix and get your own free account.

Create a Bluemix application

Bluemix provides a number of boilerplates.  Boilerplates are packages of services already bundled together.  This is usually the easiest way to get started.

  1. Click Catalog and then MobileFirst Services Starter to create a new application based on the boilerplate.
  2. Enter a name for your app.  It has to be unique, so consider including your initials or something in the name.  Click Create.
  3. Bluemix will start cranking through the creation process.  Your application will be created and staged.  Once it says your app is running, the creation process is done.

Add the IBM Internet of Things Foundation service

IBM Bluemix provides a service called Internet of Things Foundation.  This is sort of a registry service and a message broker all in one.  You can register your device in the Internet of Things, then communicate with it through IoT nodes in Node-RED.

  1. From your Bluemix App’s Overview page, click Add a service or API.  Search or scroll to the bottom and select Internet of Things Foundation.
  2. Your app will need to be restaged to add the service.  Wait until App Health indicates your application is running again.

Add the Raspberry Pi to the Internet of Things Foundation

  1. Once the app is running again, click the Internet of Things Foundation tile.
  2. Click Launch Dashboard.
  3. From the Overview tab, click Add a device.
  4. Create a device type.  This can be anything you want, say myPi.
  5. You can optionally choose the attributes you want to assign to each device of this type.  You can skip this for now if you want.  If you choose to add attributes, you will be asked to supply defaults on the next screen.  You can also add additional metadata in JSON format.
  6. After entering all that information about the device type, you will return to the Add Device wizard.  The only bit of information here that is absolutely required is the Device ID.  That is the device’s MAC address.  To find it, go back to your Raspberry Pi command line and type ifconfig eth0.  The output will look something like this:

    ifconfig output

  7. Type the string of characters following the HWaddr (leaving out the colons) into the Bluemix IoT dialog.  In my example, it would look like this:

    IoT MAC Address entry

  8. Click through the rest of the screens until you get to the page displaying Your Device Credentials.  You will be shown a box with the Authentication Token.  Copy this down immediately!  You will not be given a second chance to see it!

Configuring the IBM Push Notifications service

Next, I’ll configure the IBM Push Notifications service.  This is one of those cases where in reality, I came back and did this later, but since we are in Bluemix now, let’s set it up here.

The IBM Push Notifications service is, in a sense, a notification broker.  Applications that want to send push notifications to devices (in my case, the requester is the Raspberry Pi) will send message requests to the IBM Push Notifications service via HTTP REST commands.  The IBM Push Notifications service will, in turn, dispatch the request to the notification service for the appropriate platform.  In my case, I am going to be writing the app on iOS, so the IBM Push Notifications service will invoke APNS (the Apple Push Notification Service), which will send a remote notification to my device.  In order to receive the notification, the device app software will need to register with the IBM Push Notifications service.

Configuring Apple Push is not for the faint of heart.  First of all, it requires you to have an Apple Developer license.  You can buy a personal license for $99 per year if you don’t have access to a corporate license.  Once you have a license, you must use the Apple Developer Portal to create a Device ID, an App ID and a Provisioning Profile as well as an SSL certificate that must be used by any application that wants to request APNS to send a push notification.  That process is out of scope for this blog, but you can read about it in the iOS Developer Library.

Once you have your SSL key, you will need to register it with the IBM Bluemix Push Service.

  1. Go to your Bluemix application Overview and click the IBM Push Notifications service.
  2. Click Setup Push.

    Bluemix Push Dashboard

  3. Click Choose file under Apple Push Certificate.  (Note that the process would be similar for Google Cloud Messaging for Android devices.)
  4. Upload your p12 SSL key and enter the password for it.

It is possible to use the IBM Push Notifications dashboard to send out test notifications.  This can be helpful in debugging the process.

That takes care of configuring the Bluemix services in the cloud.  You have two primary services:

Internet of Things Foundation

  • Acts as a broker for MQTT messages between IoT Applications and Devices
  • Maintains registration information for all your Devices

IBM Push Notifications

  • Acts as a broker for remote notifications between requesting applications and the vendor cloud messaging services.
  • Provides a manual Push testing interface

That takes care of the system setup tasks.  In the next post, I will start implementing my solution and configure my Raspberry Pi Node-RED flows to send Push Notifications.  In a later post, I will enable my mobile application to request a picture from the Raspberry Pi and the Raspberry Pi to return the picture using the IoT Foundation.

Part 4: Adding in the camera

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Now I have the Raspberry Pi running Node-RED and can control a pushbutton and LED.  Cool.  And then, the mail carrier came!  My camera module arrived!

Camera module installation is pretty easy.  Just follow the Raspberry Pi Documentation.  There is an easy command-line utility called raspistill that I used to test out the camera.

Now there is no out-of-the-box node to control the camera module from Node-RED.  I never found one Googling around the internet either.  But that’s OK – this gave me an opportunity to see how extensible Node-RED is.  I would have to integrate the Raspicam Node.js library into function nodes.

Making Raspicam accessible in Node-RED

First, I had to install the raspicam module into Node.js.  The npm package manager makes that easy enough, but again, make sure you are in the right directory.

cd ~/.node-red
npm install raspicam

Now, that makes raspicam accessible in Node.js, but there is an additional step required to make it accessible within Node-RED.  See the Global Context section in the Node-RED Documentation.  In ~/.node-red/settings.js, I added

functionGlobalContext: {
        RaspiCam:require('raspicam')
},

Bounce Node-RED and you should be able to access raspicam commands inside Node-RED function nodes like

 var camera = new context.global.RaspiCam( opts );

Adding a function node

Now I’m ready to add some code to control the camera.  This is done with a function node.

  1. Drag a function node onto the canvas and connect it like this
    Function Node added to test flow

    A couple things to note here.  First, you can have more than one node wired to a node’s input.  In this case, both the Pin 12 node and the function node will send information to the msg.payload debug node.  We already saw this in the previous post because the Pin 12 node and timestamp injection node send output to the trigger node.  Secondly, you can have more than one node take input from a node’s output.  In this case, the trigger node will pulse the LED, but it will also send the trigger to the function node, which we will use to take a picture.

  2. Double-click the function node.  Name it Take Picture.  Paste the following code into the Function area:
    // We only want to take pictures when 
    // the button is pressed, not when it is released. 
    if (msg.payload == 1) { 
        var encoding = "png"; 
        var currTime = new Date().getTime();
    
        // Use the current timestamp to ensure
        // the picture filename is unique.
        var pictureFilename = "/home/pi/pictures/" + currTime + "." + encoding;
        var opts = {
            mode: "photo",
            encoding: encoding,
            quality: 10,
            width: 250,
            height: 250,
            output: pictureFilename,
            timeout: 1};
    
        // Use the global RaspiCam to create a camera object.
        var camera = new context.global.RaspiCam( opts ); 
    
        // Take a picture
        var process_id = camera.start( opts ); 
    
        // Send the file name to the next node as a payload.
        return {payload: JSON.stringify(
            {pictureFilename : pictureFilename}) };
    }
  3. Deploy the flow.  You see two messages in the debug view. The first is the message from the Pin 12 node.  The second is the message created by the “Take Picture” node.  That node has told the camera to take a picture (if you were paying attention, you would have seen the red light on the camera module flash) and has sent a message with the filename to the debug node.
    Deploy messages

    There is a problem here.  The Pin 12 node is configured with “Read initial state of pin on deploy/restart?” checked.  This is causing my flow to trigger and a picture to be taken when I don’t want it to.  I’ll fix that before verifying the camera worked.

  4. I’ll double-click the Pin 12 node and deselect “Read initial state of pin on deploy/restart?” and Deploy again.  This time I get no new debug messages.
  5. Press the pushbutton on the breadboard.  The camera light will blink as it takes a picture and the LED connected to Pin 22 will pulse for 2 seconds.
  6. I’ll open up a VNC session, find the picture file in /home/pi/pictures with the name in the debug message and open it.  The picture isn’t all that exciting.  My Pi camera is sitting on my desk, pointing toward my laptop monitor.  But it proves that things are working.

    Viewing picture on Pi VNC

Ok, so now I have a Raspberry Pi and a Node-RED flow triggered by an external button that will take a picture using the Pi camera module and store the picture in a file on the device.  I’m making progress but so far I only really have a “thing”, not an “internet of things”.  In the next post, I’ll set up my Node-RED environment on Bluemix so I have something to communicate with.