A comprehensive showcase of Azure AI services and capabilities through interactive demos.
This repository contains a Streamlit-based web application that demonstrates various Azure AI services in the domains of language, vision, and machine learning. It provides interactive examples of how to use Azure's cognitive services for tasks such as image analysis, face detection, OCR, sentiment analysis, language detection, speech services, and more.
The application is designed to be educational, allowing users to interact with Azure AI services through a user-friendly interface while also providing access to the source code for each demo.
- Language Services
  - Language Detection - Detect the language of input text
  - Key Phrase Extraction - Extract key phrases from text
  - Entity Extraction - Identify and extract entities from text
  - Entity Linking - Link entities to known references
  - Sentiment Analysis - Analyze sentiment in text
  - Text Translation - Translate text between languages
  - Text to Speech - Convert text to spoken audio
  - Speech to Text - Convert spoken audio to text
  - Speech to Speech - Translate spoken audio to another language
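The language demos above call Azure's Text Analytics / Language APIs behind the Streamlit UI. As a minimal standalone sketch, assuming the `azure-ai-textanalytics` package (the repository's demo scripts may use a different client) and the environment variables listed under the prerequisites below, language detection looks roughly like this:

```python
import os

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Credentials come from the same environment variables the app expects
client = TextAnalyticsClient(
    endpoint=os.environ["AZ_COG_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZ_COG_KEY"]),
)

# Detect the language of a sample document (text is illustrative)
result = client.detect_language(documents=["Bonjour tout le monde"])[0]
print(result.primary_language.name, result.primary_language.iso6391_name)
```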
- Computer Vision Services
  - Image Analysis - Analyze images for content, tags, objects, and more
  - Face Analysis - Detect and analyze faces in images
  - Simple OCR - Extract text from images
  - Complex OCR - Advanced text extraction from complex images
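The vision demos work along the same lines against Azure's image analysis APIs. A minimal sketch, assuming the `azure-cognitiveservices-vision-computervision` package (the actual demos may use a different SDK or API version) and an illustrative image URL:

```python
import os

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    os.environ["AZ_COG_ENDPOINT"],
    CognitiveServicesCredentials(os.environ["AZ_COG_KEY"]),
)

# Request tags and a caption for a publicly reachable image (URL is illustrative)
analysis = client.analyze_image(
    "https://example.com/sample.jpg",
    visual_features=[VisualFeatureTypes.tags, VisualFeatureTypes.description],
)

for tag in analysis.tags:
    print(f"{tag.name}: {tag.confidence:.2f}")
for caption in analysis.description.captions:
    print(f"Caption: {caption.text} ({caption.confidence:.2f})")
```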
- Machine Learning Services
  - Classification - Predict cardiovascular disease risk using the Framingham Heart Study dataset
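The classification demo sends feature values to an Azure Machine Learning online inference endpoint. A hedged sketch of such a request using `requests`; the payload shape and feature names below are purely illustrative and must be replaced with the schema the deployed Framingham model actually expects:

```python
import json
import os

import requests

# Endpoint and key come from the environment variables listed in the prerequisites
endpoint = os.environ["AZ_MLW_INFER_ENDPOINT_PREDICT_FRAMINGHAM_HEART"]
key = os.environ["AZ_MLW_INFER_ENDPOINT_KEY_PREDICT_FRAMINGHAM_HEART"]

# Illustrative payload only - the real schema depends on the deployed model
payload = {"data": [{"age": 54, "sysBP": 140, "totChol": 250, "cigsPerDay": 10}]}

response = requests.post(
    endpoint,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    },
    data=json.dumps(payload),
)
print(response.json())
```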
To run this application, you'll need:
- An Azure account with the following services enabled:
  - Azure Cognitive Services (for language and vision services)
  - Azure Machine Learning (for prediction services)
- The following environment variables set with your Azure service credentials:
  - `AZ_COG_ENDPOINT` - Cognitive Services endpoint
  - `AZ_COG_KEY` - Cognitive Services API key
  - `AZ_COG_REGION` - Cognitive Services region
  - `AZ_MLW_INFER_ENDPOINT_PREDICT_FRAMINGHAM_HEART` - Machine Learning endpoint for heart disease prediction
  - `AZ_MLW_INFER_ENDPOINT_KEY_PREDICT_FRAMINGHAM_HEART` - Machine Learning API key for heart disease prediction
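A small optional check (not part of the repository) can confirm all of these variables are set before launching the app:

```python
import os

# Names match the environment variables listed above
REQUIRED_VARS = [
    "AZ_COG_ENDPOINT",
    "AZ_COG_KEY",
    "AZ_COG_REGION",
    "AZ_MLW_INFER_ENDPOINT_PREDICT_FRAMINGHAM_HEART",
    "AZ_MLW_INFER_ENDPOINT_KEY_PREDICT_FRAMINGHAM_HEART",
]

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```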
- Clone the repository:
  git clone https://github.com/corticalstack/azure-ai-demo-gallery.git
  cd azure-ai-demo-gallery
- Install the required dependencies:
  pip install -r requirements.txt
- Set up environment variables with your Azure credentials.
- Run the Streamlit application:
  streamlit run app.py
- Alternatively, build and run the application using Docker Compose:
  docker-compose up --build
This will build the Docker image with your Azure credentials and start the application.
Q: Why does the app get downscaled in the evening?
A: As mentioned in the app's about section, the application consumes compute resources from a personal Azure subscription, so it is downscaled each evening to reduce costs.
Q: How can I add my own demo to the gallery?
A: Create a new Python script in the `src/assets/scripts/` directory, following the pattern of the existing demos. Then add your demo to the `topic_demo` OrderedDict in `app.py`.
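As a rough illustration (the function name and script structure below are assumptions; mirror an existing script in `src/assets/scripts/` and the existing `topic_demo` entries in `app.py` for the exact pattern):

```python
# src/assets/scripts/my_demo.py - a hypothetical new demo script
import streamlit as st


def run():
    # Render a minimal interactive demo in the Streamlit page
    st.header("My Demo")
    text = st.text_input("Enter some text")
    if text:
        st.write(f"You entered: {text}")
```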
Q: Do I need to have all the Azure services set up to run the application?
A: Yes, the application expects all the environment variables to be set. However, you could modify the code to make certain services optional if you only want to run specific demos.
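For example, one local modification (not the application's current behaviour) could skip demos whose credentials are absent instead of requiring every variable:

```python
import os

# Only enable the Machine Learning demos when their credentials are present
ml_enabled = all(
    os.environ.get(name)
    for name in (
        "AZ_MLW_INFER_ENDPOINT_PREDICT_FRAMINGHAM_HEART",
        "AZ_MLW_INFER_ENDPOINT_KEY_PREDICT_FRAMINGHAM_HEART",
    )
)

if not ml_enabled:
    print("Machine Learning credentials not set - skipping those demos")
```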