Platform for Situated Intelligence (or, in short, \psi, pronounced like the Greek letter) is an open, extensible framework for the development and research of multimodal, integrative-AI systems. Examples include multimodal interactive systems such as social robots and embodied conversational agents, systems for ambient intelligence and smart spaces, applications based on small devices that work with streaming sensor data, etc. In essence, any application that processes streaming sensor data (such as audio, video, depth, etc.), combines multiple (AI) technologies, and operates under latency constraints can benefit from the affordances provided by the framework.
The framework accelerates the development of these applications by providing:
- a modern, performant infrastructure for working with multimodal, temporally streaming data
- a set of tools for multimodal data visualization, annotation, and processing
- an ecosystem of components for various sensors, processing technologies, and effectors
A high-level overview of the framework is available in this blog post. A webinar containing a brief introduction and tutorial on how to code with \psi is now available as an online video.
07/29/2021: Check out this new sample application which shows how you can integrate \psi with the Teams bot architecture to develop bots that can participate in live meetings! (Please note that although it is hosted in the Microsoft Graph repository, you should post any issues or questions about this sample here).
05/02/2021: We've opened the Discussions tab on the repo and plan to use it as a place to connect with other members of our community. Please use these forums to ask questions, share ideas and feature requests, show off the cool components or projects you're building with \psi, and generally engage with other community members.
04/29/2021: Thanks to all who joined us for the Platform for Situated Intelligence Workshop! In this workshop, we discussed the basics on how to use the framework to accelerate your own work in the space of multimodal, integrative AI; presented some in-depth tutorials, demos, and previews of new features; and had a fun panel on how to build and nurture the open source community. All sessions were recorded, and you can find the videos on the event website now.
04/14/2021: We uploaded a brief overview on Platform for Situated Intelligence as part of the Microsoft Innovation Tech Minutes series.
03/31/2021: We published a technical report containing a more in-depth description of the various aspects of the framework.
The core \psi infrastructure is built on .NET Standard and therefore runs on both Windows and Linux. Some components and tools are platform-specific and available on only one operating system or the other. You can build \psi applications either by leveraging \psi NuGet packages, or by cloning and building the source code.
A Brief Introduction. To learn more about \psi and how to build applications with it, we recommend you start with the Brief Introduction tutorial, which walks you through some of the main concepts. It shows how to create a simple program, describes the core concept of a stream, and explains how to transform, synchronize, and visualize streams, as well as how to persist them to disk and replay them.
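To give a flavor of what such a program looks like, here is a minimal sketch of a \psi pipeline. It follows the pattern used in the tutorial, but the exact operator overloads shown are assumptions; please check them against the Brief Introduction and the API reference:

```csharp
using System;
using Microsoft.Psi;

class Program
{
    static void Main()
    {
        // A pipeline hosts the components and streams and schedules message delivery.
        using (var pipeline = Pipeline.Create())
        {
            // Generate a stream of integers 0, 1, 2, ... with messages 100 ms apart.
            var sequence = Generators.Sequence(pipeline, 0, x => x + 1, 100, TimeSpan.FromMilliseconds(100));

            // Transform the stream and print each message with its originating time.
            sequence
                .Select(x => x * x)
                .Do((x, envelope) => Console.WriteLine($"{x} @ {envelope.OriginatingTime}"));

            // Run the pipeline to completion.
            pipeline.Run();
        }
    }
}
```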
A Video Webinar. If you prefer getting started by watching a presentation about the framework, this video webinar gives a 30 minute high-level overview of the framework, followed by a 30 minute hands-on coding session illustrating how to write a first, simple application. Alternatively, for a shorter (~13 min) high-level overview, see this presentation we did as part of the Tech Minutes series.
Samples. If you would like to directly start from sample code, a number of small sample applications are also available, and several of them have walkthroughs that explain how the sample was constructed and point to additional documentation. We recommend you start with the samples below, listed in increasing order of complexity:
| Name | Description | Cross-plat | Requirements |
| --- | --- | --- | --- |
| HelloWorld | This sample provides the simplest starting point for creating a \psi application: it illustrates how to create and run a simple \psi pipeline containing a single stream. | Yes | None |
| SimpleVoiceActivityDetector | This sample captures audio from a microphone and performs voice activity detection, i.e., it computes a boolean signal indicating whether or not the audio contains voiced speech. | Yes | Microphone |
| WebcamWithAudio for Windows or Linux | This sample shows how to display images from a camera and the audio energy level from a microphone, and illustrates the basics of stream synchronization (see the sketch below the table). | Yes | Webcam and microphone |
| WhatIsThat | This sample implements a simple application that uses an Azure Kinect sensor to detect the objects a person is pointing to. | Windows-only | Azure Kinect + Cognitive Services |
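As a taste of the stream synchronization that the WebcamWithAudio sample illustrates, the sketch below joins two generated streams by originating time. The stream contents and rates are made up for illustration, and the exact `Join` overload is an assumption to verify against the API reference:

```csharp
using System;
using Microsoft.Psi;

class JoinExample
{
    static void Main()
    {
        using (var pipeline = Pipeline.Create())
        {
            // Two streams ticking at different rates (stand-ins for, e.g., video frames and audio levels).
            var fast = Generators.Sequence(pipeline, 0, x => x + 1, 20, TimeSpan.FromMilliseconds(50));
            var slow = Generators.Sequence(pipeline, 0, x => x + 1, 10, TimeSpan.FromMilliseconds(100));

            // Join pairs each message on the slow stream with the message on the fast stream
            // whose originating time is closest, within the given relative time window.
            var joined = slow.Join(fast, RelativeTimeInterval.Infinite);

            joined.Do(pair => Console.WriteLine($"slow={pair.Item1} fast={pair.Item2}"));
            pipeline.Run();
        }
    }
}
```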
Documentation. The documentation for \psi is available in the GitHub project wiki. It contains many additional resources, including tutorials, other specialized topics, and a full API reference that can help you learn more about the framework.
If you find a bug or if you would like to request a new feature or additional documentation, please file an issue on GitHub. Use the `bug` label when filing issues that represent code defects, and provide enough information to reproduce the bug. Use the `feature request` label to request new features, and the `documentation` label to request additional documentation.
Please also make use of the Discussions for asking general questions, sharing ideas about new features or applications you might be interested in, showing off the wonderful things you're building with \psi, and engaging with other community members.
We are looking forward to engaging with the community to improve and evolve Platform for Situated Intelligence! We welcome contributions in many forms: from simply using it and filing issues and bugs, to writing and releasing your own new components, to creating pull requests for bug fixes or new features. The Contributing Guidelines page in the wiki describes many ways in which you can get involved, and some useful things to know before contributing to the code base.
To find more information about our future plans, please see the Roadmap document.
Platform for Situated Intelligence has been and is currently used in several industry and academic research labs, including (but not limited to):
- the Situated Interaction project, as well as other research projects at Microsoft Research.
- the MultiComp Lab at Carnegie Mellon University.
- the Speech Language and Interactive Machines research group at Boise State University.
- the Qualitative Reasoning Group, Northwestern University.
- the Intelligent Human Perception Lab, at USC Institute for Creative Technologies.
- the Teledia research group, at Carnegie Mellon University.
- the F&M Computational, Affective, Robotic, and Ethical Sciences (F&M CARES) lab, at Franklin and Marshall College.
- the Transportation, Bots, & Disability Lab at Carnegie Mellon University.
If you would like to be added to this list, just file a GitHub issue and label it with the `whoisusing` label. Add a URL for your research lab, website, or project that you would like us to link to.
A more in-depth description of the framework is available in this technical report. Please cite as:
    @misc{bohus2021platform,
          title={Platform for Situated Intelligence},
          author={Dan Bohus and Sean Andrist and Ashley Feniello and Nick Saw and Mihai Jalobeanu and Patrick Sweeney and Anne Loomis Thompson and Eric Horvitz},
          year={2021},
          eprint={2103.15975},
          archivePrefix={arXiv},
          primaryClass={cs.AI}
    }
The codebase is currently in beta and various aspects of the framework are under active development. There are probably still bugs in the code and we may make breaking API changes.
Platform for Situated Intelligence is available under an MIT License. See also Third Party Notices.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
We would like to thank our internal collaborators and external early adopters, including (but not limited to): Daniel McDuff, Kael Rowan, Lev Nachmanson, and Mike Barnett at MSR; Chirag Raman and Louis-Philippe Morency in the MultiComp Lab at CMU; as well as researchers in the SLIM research group at Boise State and the Qualitative Reasoning Group at Northwestern University.