
Image Caption Bot Sample

A sample bot that illustrates how to use the Microsoft Cognitive Services Computer Vision API to analyze an image from a stream or a URL and return the image caption to the user.

Deploy to Azure

Prerequisites

The minimum prerequisites to run this sample are:

  • The latest Node.js with npm. Download it from here.
  • The Bot Framework Emulator. To install the Bot Framework Emulator, download it from here. Please refer to this documentation article to learn more about the Bot Framework Emulator.
  • A Computer Vision API subscription key. You can obtain one from the Microsoft Cognitive Services Subscriptions page.
  • [Recommended] Visual Studio Code for IntelliSense and debugging; download it from here for free.
  • This sample currently uses a free trial Microsoft Cognitive Services key with limited QPS (queries per second). Please subscribe to the Vision API services here and update the MICROSOFT_VISION_API_KEY value in the .env file to try it out further.

Code Highlights

The Microsoft Computer Vision API provides a number of methods that allow you to analyze an image. Check out Computer Vision API - v1.0 for a complete reference of the available methods. In this sample we use the 'analyze' endpoint with the 'visualFeatures' parameter set to 'Description': https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description

The main components are:

  • caption-service.js: the core component, illustrating how to call the Computer Vision REST API.
  • app.js: the bot service listener, which receives messages from the Bot Connector service and passes them down to caption-service.js.

In this sample we use the API to get the image description and send it back to the user. Check out the use of the captionService.getCaptionFromStream(stream) method in app.js:

if (hasImageAttachment(session)) {
    var stream = getImageStreamFromMessage(session.message);
    captionService
        .getCaptionFromStream(stream)
        .then(function (caption) { handleSuccessResponse(session, caption); })
        .catch(function (error) { handleErrorResponse(session, error); });
}
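The helpers hasImageAttachment and getImageStreamFromMessage used above are not shown in this excerpt. A plausible sketch of hasImageAttachment (an assumption based on the Bot Builder message shape, not necessarily the sample's exact code) simply checks the MIME type of the first attachment:

```javascript
// Returns true when the incoming message carries an image attachment.
// Illustrative sketch; session.message.attachments is the Bot Builder
// attachments array, and contentType is a MIME type like 'image/png'.
function hasImageAttachment(session) {
    return session.message.attachments.length > 0 &&
        session.message.attachments[0].contentType.indexOf('image') === 0;
}
```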

And here is the implementation of captionService.getCaptionFromStream(stream) in caption-service.js.

/**
 * Gets the caption of an image from an image stream.
 * @param {stream} stream The stream to an image.
 * @return {Promise} Promise with the caption string if succeeded, error otherwise
 */
exports.getCaptionFromStream = function (stream) {
    return new Promise(
        function (resolve, reject) {
            // VISION_URL is the 'analyze' endpoint shown above, configured with
            // the subscription key from MICROSOFT_VISION_API_KEY in .env
            var requestData = {
                url: VISION_URL,
                encoding: 'binary',
                headers: { 'content-type': 'application/octet-stream' }
            };

            stream.pipe(request.post(requestData, function (error, response, body) {
                if (error) {
                    reject(error);
                }
                else if (response.statusCode !== 200) {
                    reject(body);
                }
                else {
                    resolve(extractCaption(JSON.parse(body)));
                }
            }));
        }
    );
};
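The extractCaption helper called above is not shown in this excerpt. A minimal sketch, assuming the Computer Vision v1.0 response shape for the 'Description' feature (a confidence-ranked list of captions) and a hypothetical null fallback, might look like:

```javascript
// Extracts the most likely caption from a parsed 'analyze' response body.
// The response shape is the documented v1.0 Description result; the null
// fallback for missing captions is an assumption, not the sample's code.
function extractCaption(body) {
    if (body && body.description && body.description.captions &&
        body.description.captions.length > 0) {
        // Captions are ordered by confidence; take the top one.
        return body.description.captions[0].text;
    }
    return null;
}
```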

Outcome

You will see the following when you connect the bot to the Emulator and send it an image URL:

Input:

Sample Outcome

Output:

Sample Outcome

You can also choose to upload an image directly to the bot:

Sample Outcome

More Information

To get more information about getting started with Bot Builder for Node and the Microsoft Cognitive Services Computer Vision API, please review the following resources: