Merged
8 changes: 2 additions & 6 deletions .gitignore
@@ -178,10 +178,6 @@ _version.py
TODO.*
docs/

# Examples
*.db
*.pem

# Taskfile
.keys/
.task/
@@ -195,5 +191,5 @@ Taskfile.yml

*.exe

# Audio files
*.wav
# Other
design/
40 changes: 0 additions & 40 deletions design/code-containers-release.md

This file was deleted.

25 changes: 0 additions & 25 deletions design/core_bricks.md

This file was deleted.

70 changes: 0 additions & 70 deletions design/declarative.md

This file was deleted.

20 changes: 0 additions & 20 deletions design/imperative.md

This file was deleted.

4 changes: 2 additions & 2 deletions models/models-list.yaml
@@ -1131,7 +1131,7 @@ models:
source-model-url: "https://studio.edgeimpulse.com/studio/757509/live"
private: true
bricks:
- arduino:keyword_spotter
- arduino:keyword_spotting
- updown-wave-motion-detection:
runner: brick
name : "Continuous motion detection"
Expand Down Expand Up @@ -1182,4 +1182,4 @@ models:
source-model-url: "https://studio.edgeimpulse.com/public/749446/live"
private: true
bricks:
- arduino:audio_classifier
- arduino:audio_classification
13 changes: 6 additions & 7 deletions src/arduino/app_bricks/audio_classification/__init__.py
@@ -66,7 +66,8 @@ def stop(self):
"""
super().stop()

    def classify_from_file(self, audio_path: str, confidence: int = None) -> dict | None:
    @staticmethod
    def classify_from_file(audio_path: str, confidence: float = 0.8) -> dict | None:
        """Classify audio content from a WAV file.

        Supported sample widths:
@@ -77,9 +78,8 @@ def classify_from_file(self, audio_path: str, confidence: int = None) -> dict |

        Args:
            audio_path (str): Path to the `.wav` audio file to classify.
            confidence (int, optional): Confidence threshold (0–1). If None,
                the default confidence level specified during initialization
                will be applied.
            confidence (float, optional): Minimum confidence threshold (0.0–1.0) required
                for a detection to be considered valid. Defaults to 0.8 (80%).

        Returns:
            dict | None: A dictionary with keys:
@@ -121,9 +121,8 @@ def classify_from_file(self, audio_path: str, confidence: int = None) -> dict |
            features = list(struct.unpack(fmt, frames))
        else:
            raise ValueError(f"Unsupported sample width: {samp_width} bytes. Cannot process this WAV file.")

        classification = super().infer_from_features(features[: int(self.model_info.input_features_count)])
        best_match = super().get_best_match(classification, confidence)
        classification = AudioClassification.infer_from_features(features)
        best_match = AudioDetector.get_best_match(classification, confidence)
        if not best_match:
            return None
        keyword, confidence = best_match
@@ -6,7 +6,5 @@
# EXAMPLE_REQUIRES = "Requires an audio file with the glass breaking sound."
from arduino.app_bricks.audio_classification import AudioClassification

classifier = AudioClassification()

classification = classifier.classify_from_file("glass_breaking.wav")
classification = AudioClassification.classify_from_file("glass_breaking.wav")
print("Result:", classification)
112 changes: 91 additions & 21 deletions src/arduino/app_bricks/cloud_llm/README.md
@@ -1,39 +1,109 @@
# Cloud LLM brick
# Cloud LLM Brick

This directory contains the implementation of the Cloud LLM brick, which provides an interface to interact with cloud-based Large Language Models (LLMs) through their REST API.
The Cloud LLM Brick provides a seamless interface to interact with cloud-based Large Language Models (LLMs) such as OpenAI's GPT, Anthropic's Claude, and Google's Gemini. It abstracts the complexity of REST APIs, enabling you to send prompts, receive responses, and maintain conversational context within your Arduino projects.

## Overview

The Cloud LLM brick allows users to send prompts to a specified LLM service and receive generated responses.
It can be configured to work with a curated set of LLM providers that offer RESTful APIs, notably: ChatGPT, Claude and Gemini.
This Brick acts as a gateway to powerful AI models hosted in the cloud. It is designed to handle the nuances of network communication, authentication, and session management. Whether you need a simple one-off answer or a continuous conversation with memory, the Cloud LLM Brick provides a unified API for different providers.

## Features

- **Multi-Provider Support**: Compatible with major LLM providers including Anthropic (Claude), OpenAI (GPT), and Google (Gemini).
- **Conversational Memory**: Built-in support for windowed history, allowing the AI to remember context from previous exchanges.
- **Streaming Responses**: Receive text chunks in real-time as they are generated, ideal for responsive user interfaces.
- **Configurable Behavior**: Customize system prompts, temperature (creativity), and request timeouts.
- **Simple API**: Unified `chat` and `chat_stream` methods regardless of the underlying model provider.

## Prerequisites

Before using the Cloud LLM brick, ensure you have the following:
- An account with a cloud-based LLM service (e.g., OpenAI, Cohere, etc.).
- API access credentials (API key or token) for the LLM service.
- Network connectivity to access the LLM service endpoint.
- **Internet Connection**: The board must be connected to the internet to reach the LLM provider's API.
- **API Key**: A valid API key for the chosen service (e.g., OpenAI API Key, Anthropic API Key).
- **Python Dependencies**: The Brick relies on LangChain integration packages (`langchain-anthropic`, `langchain-openai`, `langchain-google-genai`).

## Features
## Code Example and Usage

### Basic Conversation

- Send prompts to a cloud-based LLM service.
- Receive and process responses from the LLM.
- Supports both one-shot requests and memory for follow-up questions and answers.
- Supports a curated set of LLM providers.
This example initializes the Brick with an OpenAI model and performs a simple chat interaction.

## Code example and usage
Here is a basic example of how to use the Cloud LLM brick:
**Note:** The API key is not hardcoded. It is retrieved automatically from the **Brick Configuration** in App Lab.

```python
from arduino.app_bricks.cloud_llm import CloudLLM
import os
from arduino.app_bricks.cloud_llm import CloudLLM, CloudModel
from arduino.app_utils import App

llm = CloudLLM(api_key="your_api_key_here")
# Initialize the Brick (API key is loaded from configuration)
llm = CloudLLM(
    model=CloudModel.OPENAI_GPT,
    system_prompt="You are a helpful assistant for an IoT device."
)

App.start_bricks()
def simple_chat():
    # Send a prompt and print the response
    response = llm.chat("What is the capital of Italy?")
    print(f"AI: {response}")

response = llm.chat("What is the capital of France?")
print(response)
# Run the application
App.run(simple_chat)
```

### Streaming with Memory

This example demonstrates how to enable conversational memory and process the response as a stream of tokens.

```python
from arduino.app_bricks.cloud_llm import CloudLLM, CloudModel
from arduino.app_utils import App

App.stop_bricks()
# Initialize with memory enabled (keeps last 10 messages)
# API Key is retrieved automatically from Brick Configuration
llm = CloudLLM(
    model=CloudModel.ANTHROPIC_CLAUDE
).with_memory(max_messages=10)

def chat_loop():
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break

        print("AI: ", end="", flush=True)

        # Stream the response token by token
        for token in llm.chat_stream(user_input):
            print(token, end="", flush=True)
        print()  # Newline after response

App.run(chat_loop)
```

## Configuration

The Brick is initialized with the following parameters:

| Parameter | Type | Default | Description |
| :-------------- | :-------------------- | :---------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- |
| `api_key` | `str` | `os.getenv("API_KEY")` | The authentication key for the LLM provider. **Recommended:** Set this via the **Brick Configuration** menu in App Lab instead of code. |
| `model` | `str` \| `CloudModel` | `CloudModel.ANTHROPIC_CLAUDE` | The specific model to use. Accepts a `CloudModel` enum or its string value. |
| `system_prompt` | `str` | `""` | A base instruction that defines the AI's behavior and persona. |
| `temperature` | `float` | `0.7` | Controls randomness. `0.0` is deterministic, `1.0` is creative. |
| `timeout` | `int` | `30` | Maximum time (in seconds) to wait for a response. |
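
A minimal initialization sketch combining these parameters; the prompt text and values are illustrative, and in App Lab the API key would normally come from the Brick Configuration rather than code:

```python
from arduino.app_bricks.cloud_llm import CloudLLM, CloudModel

llm = CloudLLM(
    model=CloudModel.OPENAI_GPT,
    system_prompt="Answer in one short sentence.",  # persona/behavior
    temperature=0.2,  # low randomness for factual replies
    timeout=60,       # wait up to a minute per request
)
```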

### Supported Models

You can select a model using the `CloudModel` enum or by passing the corresponding raw string identifier.

| Enum Constant | Raw String ID | Provider Documentation |
| :---------------------------- | :------------------------- | :-------------------------------------------------------------------------- |
| `CloudModel.ANTHROPIC_CLAUDE` | `claude-3-7-sonnet-latest` | [Anthropic Models](https://docs.anthropic.com/en/docs/about-claude/models) |
| `CloudModel.OPENAI_GPT` | `gpt-4o-mini` | [OpenAI Models](https://platform.openai.com/docs/models) |
| `CloudModel.GOOGLE_GEMINI` | `gemini-2.5-flash` | [Google Gemini Models](https://ai.google.dev/gemini-api/docs/models/gemini) |
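
Passing the raw string ID should behave the same as the enum (a sketch, assuming the identifier is forwarded to the provider unchanged):

```python
llm = CloudLLM(model="gpt-4o-mini")  # equivalent to CloudModel.OPENAI_GPT
```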

## Methods

- **`chat(message)`**: Sends a message and returns the complete response string. Blocks until generation is finished.
- **`chat_stream(message)`**: Returns a generator yielding response tokens as they arrive.
- **`stop_stream()`**: Interrupts an active streaming generation.
- **`with_memory(max_messages)`**: Enables history tracking. `max_messages` defines the context window size.
- **`clear_memory()`**: Resets the conversation history.
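
A short sketch exercising the memory and interruption helpers; the timer-based interrupt is illustrative, not part of this PR:

```python
import threading

from arduino.app_bricks.cloud_llm import CloudLLM, CloudModel

llm = CloudLLM(model=CloudModel.OPENAI_GPT).with_memory(max_messages=6)

llm.chat("My name is Ada.")
print(llm.chat("What is my name?"))  # history lets the model recall "Ada"
llm.clear_memory()  # reset the conversation window

# stop_stream() cuts off an in-flight generation, e.g. from a timer thread
threading.Timer(5.0, llm.stop_stream).start()
for token in llm.chat_stream("Tell me a very long story."):
    print(token, end="", flush=True)
print()
```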
5 changes: 2 additions & 3 deletions src/arduino/app_bricks/cloud_llm/brick_config.yaml
@@ -1,7 +1,6 @@
id: arduino:cloud_llm
name: Cloud LLM
description: "Cloud LLM Brick enables seamless integration with cloud-based Large Language Models (LLMs) for advanced AI capabilities in your Arduino projects."
disabled: true

variables:
- API_KEY
- name: API_KEY
  description: API Key for the cloud-based LLM service
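
Per the README's configuration table, `api_key` defaults to `os.getenv("API_KEY")`, so the variable declared here should reach the brick through the environment (a sketch of that assumption):

```python
import os

# App Lab's Brick Configuration exposes API_KEY as an environment variable;
# CloudLLM presumably falls back to it when api_key is not passed explicitly.
api_key = os.getenv("API_KEY")
```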