Exploring the internet through REST APIs or how to build an HTTP client

Around six months ago, Google began marketing a bundle of fascinating APIs with the launch of their Cloud Platform. Using a simple REST interface, one could translate between any two languages on the planet (Translation API), determine the content of an image (Vision API), extract meaning from natural language (Cloud Natural Language API), and much more. I had a dream of running the image posted by WHOZ CHILLIN through the Vision API to let me know whether the tennis courts were free and I could run over and play. While that turned out to be drastically out of scope for the service, it was a nice idea at the time.

Anyway, the only problem was that I had no idea how to use a REST API. I knew that one somehow sent a request to a web service and somehow received a response, and that the whole process was supposed to be easy, but I had no idea how it was actually implemented. All of the engineers with whom I worked assured me that the WHOZ CHILLIN to Vision API project would be trivial, so I was too embarrassed to ask for help.

As I rolled up my sleeves and went to work, I learned that most communication on the web happens over HTTP and that there are four common kinds of requests: GET, POST, PUT, and DELETE.

GET is what happens when you visit a website with a browser. Your browser gets something (most simply, a static webpage) from some server and displays it on your screen. You can see how this works with Wikipedia’s API by pasting

https://en.wikipedia.org/w/api.php?action=query&titles=McKinnon&prop=revisions&rvprop=content&format=jsonfm

into your browser. Here, your browser is submitting a GET request to Wikipedia’s API at https://en.wikipedia.org/w/api.php, and Wikipedia is returning the content of the article titled “McKinnon” as JSON. You could just as easily make this request through your own HTTP client or at the command line, as we will learn shortly.

POST, PUT, and DELETE requests all involve sending information in addition to receiving it. When you fill out a web form, you are typically POSTing; when you need to overwrite an existing attribute, you are typically PUTting; and DELETE is self-evident. There are several other HTTP methods as well, but I have never seen them in practice.

Before building an HTTP client, it is helpful to get a better understanding of how these calls actually work using both the command line and a program called Postman. Most systems come with a command-line tool called curl that lets you make HTTP requests right from the terminal window. Try entering

curl --request GET http://swapi.co/api/people/1/

into the terminal. The API will return a JSON string full of information about Luke Skywalker. Changing the 1 to a 2 will help you learn about C-3PO instead. Swapi.co has actually built an extensive open (no authentication, which is usually the hardest part in real development) REST API that lets you query facts about the Star Wars universe while learning about HTTP requests.

However, curl requests can get very tedious to keep track of. Postman is a free program that lets you save and repeat HTTP requests, which is enormously helpful in debugging as interactions between machines get more and more complicated. This same Luke Skywalker request through Postman is shown below.

With a simple GET request, the difference between curl and Postman is negligible, but once you start passing headers and tokens and data around, testing using curl becomes unmanageable.

Now that you know how to use the basic tools, it’s time to write our first client. HTTP clients can be written in any language: Swift, JavaScript, Visual Basic, Python, Java, etc. For the web, which is where most of this request machinery lives, Python, with its beautiful Requests: HTTP for Humans library, is the easiest and will be used for these demos.
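
As a quick taste before we start, here is the Luke Skywalker request from earlier rewritten with Requests. This is a minimal sketch; the exact fields in the returned JSON are whatever the API chooses to send.

import requests

# The same Star Wars API call we made with curl, this time from Python.
response = requests.get("http://swapi.co/api/people/1/")
print(response.status_code)   # 200 if everything went well
print(response.json())        # the body parsed from JSON into a Python dict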

To start off, it’s helpful to choose a project. Think of some IoT-like device that you would like to control from the internet. For this first example, I will use my Philips Hue lightbulbs, whose REST API is very well documented at https://www.developers.meethue.com/.

Next, begin prototyping your code with Postman. It is pointless to begin writing code unless you know exactly what requests you will be making, so I would recommend writing down the entire flow on paper. For my Hue app, I knew I needed to POST {"devicetype":"APP_NAME#USERNAME"} to http://10.0.0.5/api to create a user and PUT {"on":true, "sat":saturation, "bri":brightness, "hue":color} to http://10.0.0.5/api/USERNAME/lights/LIGHT_NUMBER/state to turn on a light and change its colors. This is very simple from a requests perspective, but I still made sure to test all of this in Postman and confirm that I could indeed change the state of the lights from my computer.

After sketching out the flow, open up a text file with your favorite text editor, download my sample code from Github, and create your own project so your fans and admirers can collaborate. Because we will be leveraging the Requests library, the first line should read

import requests

Once the library has been imported, you can use

requests.get, requests.put, requests.post, and requests.delete

The format is requests.TYPE(API_ENDPOINT, json = {whatever you want to send}). Sometimes it takes some experimentation to determine whether the payload above should be passed as json, data, or headers, but there are not too many options. For example, to turn bulb i to a dim, blue hue, use the line below.

requests.put('http://' + IP + '/api/' + username + '/lights/' + str(i) + '/state', json = {"on": True, "sat": 255, "bri": 10, "hue": 46920})
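
To make the json-versus-data-versus-headers point concrete, the sketch below shows the three common ways of attaching information to a request. The https://api.example.com endpoint is made up; which argument you need depends entirely on what the service you are calling expects.

import requests

# json= serializes a Python dict into a JSON body and sets the Content-Type header for you.
requests.post("https://api.example.com/things", json = {"name": "lamp", "on": True})

# data= sends a form-encoded body (or a raw string, if you pass one).
requests.post("https://api.example.com/things", data = {"name": "lamp"})

# headers= attaches metadata such as tokens to the request itself rather than to the body.
requests.get("https://api.example.com/things", headers = {"Authorization": "Bearer YOUR_TOKEN"})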

You can see in the sample code that I loop through the two lights in my room and keep the IP and the username at the top, so I only need to change them in one place if the need arises.
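
Pieced together, the core of the script is just a loop like the sketch below; the IP address, username, and light numbers are placeholders for the values from your own bridge.

import requests

IP = "10.0.0.5"          # bridge IP, kept at the top so it only ever changes in one place
username = "USERNAME"    # the user created with the POST to /api
lights = [1, 2]          # the two bulbs in my room

# Turn each bulb to the same dim, blue hue.
for i in lights:
    requests.put('http://' + IP + '/api/' + username + '/lights/' + str(i) + '/state',
                 json = {"on": True, "sat": 255, "bri": 10, "hue": 46920})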

I don’t include any error handling in this code, so go ahead and run it on your machine as-is. I give the user the option to turn the lights in my room dim blue, turn the lights in the kitchen and the living room to a mellow candle, or randomly oscillate the lights throughout the entire house. You can see it is quite straightforward. Discotequa Kitchen is on display below.
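
The random oscillation mode is just a variation on that loop. Roughly, and with made-up light numbers and timing (the bridge accepts hue values from 0 to 65535):

import random
import time
import requests

IP = "10.0.0.5"
username = "USERNAME"
all_lights = [1, 2, 3, 4, 5]   # every bulb in the house

# Push a new random color to every bulb once a second for half a minute.
for _ in range(30):
    for i in all_lights:
        requests.put('http://' + IP + '/api/' + username + '/lights/' + str(i) + '/state',
                     json = {"on": True, "sat": 255, "bri": 150, "hue": random.randint(0, 65535)})
    time.sleep(1)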

Now let’s try a more complicated HTTP client. For this one, we will be using Pix4D’s cloud processing API to turn photos into a 3D model like the one below. This API is not public the way the Hue API is, but it does represent a fairly typical web service.

First, take a look at my sample code on Github. You’ll notice this is much more complex than the Hue app. In addition to requests, I import boto, a library for uploading to Amazon S3; os, a library for crawling the local file structure; and simplejson, a library that improves the display of JSON. I also have a ClientID, ClientSecret, and password. On a side note, it is very important not to upload these to Github, because scrapers crawl the internet looking for exactly these kinds of credentials. Scanning through the code, you’ll notice I define a few functions, have the ability to both process and check jobs, and make an awful lot of HTTP requests.
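
One easy way to keep credentials like these out of a public repository is to read them from environment variables at runtime instead of hard-coding them. The sketch below assumes three made-up variable names; my actual sample code may handle this differently.

import os

# Made-up variable names; set these in your shell, not in the source file.
CLIENT_ID = os.environ["PIX4D_CLIENT_ID"]
CLIENT_SECRET = os.environ["PIX4D_CLIENT_SECRET"]
PASSWORD = os.environ["PIX4D_PASSWORD"]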

However, this more complex client is exactly like the Hue example above and can be built in the exact same way. Initially, I sketched out all of the HTTP requests I needed to make. Following along in the code, you can see that I request a token (GET), save the response, create a mission using those credentials (POST), save the returned URL, post the pictures to the mission (POST), and initiate processing (GET). While there are a few tricky intermediate steps like uploading the images to S3 and walking through the local file structure to generate a list of images, the requests piece is actually very straightforward.
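
In outline, the first couple of requests in that flow look something like the sketch below. The host, paths, header names, and response fields here are placeholders rather than the real Pix4D routes; the point is the pattern of saving one response and feeding it into the next request.

import requests

BASE = "https://cloud.example.com"   # placeholder host, not the real Pix4D endpoint
CLIENT_ID = "YOUR_CLIENT_ID"
CLIENT_SECRET = "YOUR_CLIENT_SECRET"

# Request a token (GET) and save the response.
token_response = requests.get(BASE + "/oauth/token",
                              headers={"Client-ID": CLIENT_ID, "Client-Secret": CLIENT_SECRET})
token = token_response.json()["access_token"]   # field name is a placeholder

# Create a mission using those credentials (POST) and save the returned URL.
mission_response = requests.post(BASE + "/missions",
                                 headers={"Authorization": "Bearer " + token},
                                 json={"name": "My first cloud mission"})
mission_url = mission_response.json()["url"]    # field name is a placeholder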

I hope that after all of this you feel comfortable building your own HTTP client. If you someday accomplish my original vision (ha) by linking WHOZ CHILLIN and the Vision API, enabling you to analyze a drone video stream in real time, be sure to let me know!


Sharing drone data in Mapbox for fun and profit

Autonomous flying vehicles (like Site Scan!) capture all kinds of interesting GIS data: photos, projected photos, and flight logs, to name a few. Furthermore, drone imagery can be processed into other GIS data products like orthomosaic images and point clouds, which become infinitely more valuable when combined with traditional GIS data sources like property boundaries and civil engineering diagrams. That said, sharing these data offline is quite challenging: massive geotiffs are painful to open in heavy GIS software like ArcMap and QGIS, individual images are troublesome to keep track of when not associated with a map, and flight logs are near useless in their raw .tlog or .bin form. Our friends at Mapbox have assembled an incredible (and free up to a substantial quota) toolkit for sharing all of these data in a simple webview format.

Follow these instructions to create an interactive map showcasing your drone data like the one below; its source code is available at https://github.com/dmckinno/Mapbox. For this particular example, I overlaid property boundary and fantastical building footprint KMLs, an orthomosaic geotiff, and a handful of images viewable by clicking on the locations at which they were taken, but essentially any GIS vector or raster data can be shared using Mapbox.

    1. Create an account over at https://www.mapbox.com/studio/signup/.
    2. Open Mapbox Studio. If you are not interested in adding interactivity to your maps, you can share all of your drone data without writing a single line of code. Simply upload your data as a tileset and add that tileset to a style.

      Mapbox accepts raster data as geotiffs and vector data in several formats: .kml, which can be generated from Arducopter .tlogs using Mission Planner; .gpx, the standard output from handheld GPS devices; GeoJSON, a GIS standard that is easily passed between systems; .shp, an open ESRI format; and .csv. After uploading any of these file formats, Mapbox will slice the data into either vector or raster tiles that can be navigated beautifully on the web.

      Note that you are unable to display a single nadir photo or a point cloud using Studio. To display a single nadir image in Mapbox, you must write a few lines of JavaScript (see below for examples). If you would like to display a point cloud, you must first use laslib or a commercial tool like ArcMap to convert it to a geotiff.

      Once you have uploaded your tileset, create a new Style. Choose the desired basemap, add your tileset(s) to the map, and click “Publish.” Mapbox will take you to the “Preview, Develop, and Use” page. Copy and paste the generated URL and send it to all of your friends. This particular example is available here.
    3. Great! That’s the easy part. Static maps are nice, but Mapbox has a host of excellent examples that guide you in creating interactive maps displaying a wealth of different data types.

      First, the structure. Please clone my example repo at https://github.com/dmckinno/Mapbox to follow along. Tilesets uploaded via Mapbox Studio cannot be manipulated by the user viewing them in the browser. To build the types of interactivity that I showed in the example above, you must write a few lines of JavaScript. Fortunately, this is quite straightforward and nicely mirrors the GUI options in Studio. To add a layer, use

      map.on('load', function () {
          map.addSource('ortho', {
              type: 'raster',
              url: 'mapbox://dmckinno.8u0goq8l'
          });
          map.addLayer({
              "id": "ortho",
              "type": "raster",
              "source": "ortho",
              "source-layer": "OrthoImage-6x2l6d"
          });
      });

      This is a combination of the map.on(‘load’, …) handler and the map.addSource and map.addLayer functions: the handler waits for the map to finish loading, addSource registers the data, and addLayer actually places it on the map. In this case, I am adding a raster from a given Mapbox URL (created by concatenating mapbox:// and the Map ID shown inside the tileset view) and placing it on the map with the ID “ortho.” I always keep the source and the ID the same, but I’m sure some Mapbox wizard understands why they may be different in certain cases. The source-layer comes from the individual layer within the tileset. This is irrelevant for rasters, but important if a vector tileset contains several different features. Either way, the source-layer is available in the “Raster tileset” or “Vector tileset” tab in the tileset view.

      Vector layers are added in a similar fashion.

      map.on('load', function () {
          map.addSource('views', {
              type: 'vector',
              url: 'mapbox://dmckinno.9erzotdj'
          });
          map.addLayer({
              "id": "views",
              "type": "circle",
              "source": "views",
              "source-layer": "redhorse_photos-6besux",
              "paint": {
                  "circle-color": "#ffffff",
                  "circle-radius": 3
              }
          });
      });

      The only differences arise in the options available for displaying the data beautifully. Vector data can be painted with different colors, thicknesses, fills, and opacities. Here, I simply drew white circles that indicate photo locations, but more complex vector layers can be displayed in endlessly varied and beautiful ways. The Mapbox documentation walks through the options in painstaking detail.

    4. Now you have your orthomosaic and vector data loaded, but you may want to augment the orthomosaic with individual nadir images. If you calculate the coordinates of the four corners of the image from the altitude of your drone and the focal length of the camera using simple geometry (a rough Python sketch of this calculation follows this step), you can display nadir images as pseudo-orthomosaics using the code below.

      map.on('load', function () {
          map.addSource('photo1', {
              type: 'image',
              url: 'http://www.ddmckinnon.com/wp-content/uploads/2016/09/DJI_0202.jpg',
              coordinates: [
                  [-105.2311759, 40.0848768],
                  [-105.2335889, 40.0851527],
                  [-105.2333181, 40.0865392],
                  [-105.2309051, 40.0862633]
              ]
          });
          map.addLayer({
              "id": "photo1",
              "type": "raster",
              "source": "photo1"
          });
      });

      It is somewhat challenging to embed two Mapbox maps in a single WordPress post (I use Code Embed, but please let me know if a better way exists), but an interactive map is available here and an image preview is shown below. Note that because the image is not tiled, it is much less responsive than a true tiled orthomosaic.
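
      For the corner calculation itself, the sketch below shows the simple pinhole-camera geometry I have in mind, assuming a perfectly nadir, north-aligned photo (a real image needs the drone's yaw folded in, and the sensor dimensions here are made-up example values).

      import math

      def image_corners(lat, lon, altitude_m, focal_length_mm, sensor_w_mm, sensor_h_mm):
          # Pinhole camera: the ground footprint scales as altitude / focal length.
          ground_w = altitude_m * sensor_w_mm / focal_length_mm   # meters east-west
          ground_h = altitude_m * sensor_h_mm / focal_length_mm   # meters north-south
          # Convert half the footprint to degrees of latitude and longitude.
          dlat = (ground_h / 2) / 111320.0
          dlon = (ground_w / 2) / (111320.0 * math.cos(math.radians(lat)))
          # Ordered to match Mapbox image sources: top-left, top-right, bottom-right, bottom-left.
          return [[lon - dlon, lat + dlat],
                  [lon + dlon, lat + dlat],
                  [lon + dlon, lat - dlat],
                  [lon - dlon, lat - dlat]]

      # Example: a photo taken at 100 m with a 3.6 mm lens on a 6.2 x 4.6 mm sensor.
      print(image_corners(40.0857, -105.2322, 100, 3.6, 6.2, 4.6))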

    5. The final vector layer type you may be interested in displaying is image locations. It is often helpful to show where images were taken on a map and be able to view them in context, whether or not they are projected onto the basemap. To do this, you must create a .kml, .shp, or .geojson file from image EXIF data. If you only have a few images, you can easily do this manually, but for more than a handful I would recommend using the exif-to-geojson tool. This tool worked beautifully for the fifty or so photos I posted from my off-road motorcycle trip from Boulder to Moab. Once you have the geojson file containing all of the photo information, add a description field with the image URL and any text you like to each image, as in the snippet below (a small Python sketch that generates this structure is included after the snippet). You can see another nice example of this in the map of my motorcycle trip.

      "data": {
          "type": "FeatureCollection",
          "features": [{
              "type": "Feature",
              "properties": {
                  "description": "<img src='YOUR IMAGE URL HERE'> and any text or links you like"
              },
              "geometry": {
                  "type": "Point",
                  "coordinates": [YOUR LONGITUDE, YOUR LATITUDE]
              }
          }]
      }
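
      If you would rather generate that structure with a script than by hand, a minimal sketch is below; the photo list is made up, so swap in your own coordinates and image URLs.

      import json

      # (longitude, latitude, image URL) for each photo; made-up values for illustration.
      photos = [
          (-105.2322, 40.0857, "http://example.com/photo1.jpg"),
          (-105.2318, 40.0861, "http://example.com/photo2.jpg"),
      ]

      features = []
      for lon, lat, url in photos:
          features.append({
              "type": "Feature",
              "properties": {"description": "<img src='" + url + "'> and any text you like"},
              "geometry": {"type": "Point", "coordinates": [lon, lat]}
          })

      with open("photos.geojson", "w") as f:
          json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)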

    6. Now that you’ve added all of your layers, you likely want to include some interactivity. I’ve played with all of the Mapbox examples, but most relevant here is the ability to show and hide layers and show images upon a click. For my example above, I copied and pasted from Mapbox with minor modifications and I recommend you do the same, making sure that you grab every line that you need.
    7. And voila! Now you have all the tools you need to share your drone data with all of your friends and followers using Mapbox and a few lines of JavaScript.