Lab: Sending a file to Watson Visual Recognition for classification

Overview

This flow allows you to create a webpage, select a picture from your file system and submit it to the Visual Recognition service via Node-RED. You will receive the image classifications both on the webpage and in the Node-RED debug tab. The lab makes use of an HTTP Multipart node and a File Buffer node.

Prerequisites and setup

This lab assumes you have prior knowledge of the Visual Recognition Service and/or have completed the Visual Recognition lab.

Before starting this lab, you should:

  • have an IBM Cloud account
  • have created an instance of Node-RED
  • have created a Visual Recognition service in IBM Cloud

Please refer to the Node-RED setup lab for instructions.

Step 1 - Create the HTML webpage

Add an HTTP In node, a Template node and an HTTP Response node to the Node-RED canvas and wire them together. HTTP Flow

Double click on the HTTP In node and configure it as follows, with the method set to GET and the URL set to /homepage (the path used to open the page when testing in Step 4): HTTP In Settings

Double click on the template node and name the node 'Form and Response Template'. Paste in the following HTML:

<html>
    <body>
       <form action="/classify" method="post" enctype="multipart/form-data">
           <input type="file" name="pic" accept="image/*"><br>
           <input type="submit" value="Submit">
       </form>
       <div>Classifications:</div>
       <div>
           {{#result}}
           <table>
           <tr>
               <th>Class</th>
               <th>Score</th>
               <th>Type</th>
           </tr>
           {{#images}}
           {{#.}}
           {{#classifiers}}
           {{#.}}
           {{#classes}}
           {{#.}}
               <tr>
                   <td>{{class}}</td>
                   <td>{{score}}</td>
                   <td>{{&type_hierarchy}}</td>
               </tr>
           {{/.}}
           {{/classes}}
           {{/.}}
           {{/classifiers}}
           {{/.}}
           {{/images}}
           </table>
           {{/result}}
       </div>
    </body>
</html>
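
The Mustache sections above walk the classification result that is placed on msg.result later in the flow (Step 4 points the debug node at msg.result). As a rough illustration only, assuming the Visual Recognition node's output is available as msg.result when the template renders, the data it iterates over is shaped roughly like this (class names and scores below are made up):

// Illustrative only: values are invented; the nesting mirrors what the
// template iterates over (msg.result -> images -> classifiers -> classes).
msg.result = {
    images: [{
        classifiers: [{
            classes: [
                { "class": "animal", "score": 0.97, "type_hierarchy": "/animal" },
                { "class": "dog",    "score": 0.94, "type_hierarchy": "/animal/dog" }
            ]
        }]
    }]
};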

Step 2 - Install new nodes

In Node-RED, open the menu in the top-right corner and select Manage Palette.

Click the Install tab, search for node-red-contrib-file-buffer and install the node.

File Buffer Install

After it has installed, search for node-red-contrib-http-multipart and install that node.

HTTP Multipart Install

Step 3 - Build the Full Flow

Add the following nodes to the canvas:

  • HTTP Multipart Node
  • Function Node
  • File Buffer Node
  • Visual Recognition Node
  • Debug Node

and wire them together as follows: Full Flow
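
If you cannot see the screenshot, one possible wiring is sketched below. This is an interpretation based on the overview and Step 4, which imply that the Visual Recognition output goes both to the Form and Response Template from Step 1 and to the debug node:

HTTP Multipart -> Determine File Path (function) -> File Buffer -> Visual Recognition -> Form and Response Template -> HTTP Response
                                                                                      \-> Debug (msg.result)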

Step 4 - Configure the nodes

Double click on the HTTP Multipart node and configure it as follows. It needs to accept POST requests on /classify, matching the action and method of the HTML form from Step 1: HTTP Multipart Settings

Double click on the function node. Name the node 'Determine File Path' and paste in the following code:

// The HTTP Multipart node places any uploaded files on msg.req.files.
if (msg.req.files) {
    var files = Object.keys(msg.req.files);
    // Take the first file of the first form field and pass its
    // temporary file path on to the File Buffer node as the payload.
    msg.payload = msg.req.files[files[0]][0].path;
}
return msg;
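
For reference, the files object the function indexes looks roughly like the sketch below. This is illustrative only; the exact fields depend on the HTTP Multipart node, but the nesting the code relies on is a field name, then an array of file objects, each with a path property:

// Illustrative only: the field name and path are example values, not real ones.
msg.req.files = {
    "pic": [                                  // 'pic' is the file input's name in the HTML form
        { "path": "/tmp/upload_1234abcd" }    // temporary path where the upload was saved
    ]
};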

For the File Buffer node, ensure that it is set to 'buffer' rather than 'stream'. File Buffer Settings

Configure the settings for the Visual Recognition node. If you haven't bound your Node-RED application to your Visual Recognition service, you need to enter an API key. Make sure that the 'Detect' field is set to 'Classify an Image'.

Visual Recognition Settings

For the debug node, change the output from msg.payload to msg.result. Debug Settings

Deploy the application and test the flow by going to https://<your_application>.<region>.mybluemix.net/homepage and uploading an image from your file system. After selecting the image and pressing Submit, the webpage should redirect to /classify and display the image classifications; the same classifications should appear in the Node-RED debug tab.

Webpage Results Debug Results

The final flow is shown below and can be downloaded here. Final Flow