Your Personal AI Assistant – Fast, Private, and Easy to Use
SwiftChat is a fast and responsive AI chat application built with React Native and powered by Amazon Bedrock, with support for other model providers such as Ollama, DeepSeek, OpenAI, and OpenAI-compatible APIs. With a minimalist design philosophy and robust privacy protection, it delivers real-time streaming conversations, AI image generation, and voice conversation assistant capabilities across Android, iOS, and macOS.
- Support using a Bedrock API Key for Amazon Bedrock models (from v2.5.0).
- Support virtual try-on: automatically recognizes clothes, pants, and shoes and tries them on (from v2.5.0).
- Support keyboard shortcuts on macOS (from v2.5.0):
- Use `Shift + Enter`, `Control + Enter`, or `Option + Enter` to add a line break.
- Use `⌘ + V` to add images (screenshots), videos, or documents from your clipboard.
- Use `⌘ + N` to open multiple Mac windows for parallel operations.
- Support adding multiple OpenAI Compatible model providers. You can now use Easy Model Deployer, OpenRouter, or any OpenAI-compatible model provider (from v2.5.0).
- Support dark mode on Android, iOS, and macOS (from v2.4.0).
- Support Speech to Speech with Amazon Nova Sonic on Apple platforms (from v2.3.0).
- Download for Android
- Download for macOS
- For iOS: currently available through a local build with Xcode
Click Amazon Bedrock Model access to enable access to your models.
You can choose one of the following two methods for configuration:
Configure Bedrock API Key (Click to expand)
- Click Amazon Bedrock Console to create a long-term API key.
- Copy and paste the API key into Amazon Bedrock -> Bedrock API Key on the SwiftChat Settings page.
- The app automatically fetches the latest model list for your currently selected region. If multiple models appear in the list, the configuration was successful.
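If you want to sanity-check the key outside the app, the sketch below (TypeScript) lists the foundation models visible to it. It assumes the long-term API key is accepted as a bearer token by the Bedrock endpoint of your region; the region value is only an example.

```typescript
// Sketch: list the foundation models visible to a Bedrock API key.
// Assumption: the long-term key is sent as a bearer token; the region is an example.
const region = "us-west-2";
const apiKey = process.env.BEDROCK_API_KEY ?? "";

async function listBedrockModels(): Promise<string[]> {
  const res = await fetch(`https://bedrock.${region}.amazonaws.com/foundation-models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Bedrock returned ${res.status}`);
  }
  const body = (await res.json()) as { modelSummaries: { modelId: string }[] };
  return body.modelSummaries.map((m) => m.modelId); // e.g. "amazon.nova-lite-v1:0"
}

listBedrockModels().then(console.log).catch(console.error);
```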
Configure SwiftChat Server (Click to expand)
By default, we use AWS App Runner, which is commonly used to host Python FastAPI servers, offering high performance, scalability and low latency.
Alternatively, we provide the option to replace App Runner with AWS Lambda using Function URL for a more cost-effective solution, as shown in this example.
- Sign in to your AWS console and right-click Parameter Store to open it in a new tab.
- Check whether you are in a supported region, then click the Create parameter button.
- Fill in the parameters below, leaving other options as default:
  - Name: Enter a parameter name (e.g., "SwiftChatAPIKey"; it will be used as `ApiKeyParam` in Step 2).
  - Type: Select `SecureString`.
  - Value: Enter any string without spaces (this will be your `API Key` in Step 3).
- Click Create parameter.
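If you prefer scripting over the console, the same parameter can be created with the AWS SDK for JavaScript v3 before moving on to the stack deployment. The name and value below are the same placeholders as above, and the sketch assumes you have AWS credentials configured locally.

```typescript
// Sketch: create the SecureString parameter with the AWS SDK for JavaScript v3
// instead of the console. Requires `npm i @aws-sdk/client-ssm` and local AWS credentials.
import { SSMClient, PutParameterCommand } from "@aws-sdk/client-ssm";

const client = new SSMClient({ region: "us-east-1" }); // pick a supported region

await client.send(
  new PutParameterCommand({
    Name: "SwiftChatAPIKey",              // later referenced as ApiKeyParam
    Value: "your-api-key-without-spaces", // later entered as API Key in the app
    Type: "SecureString",
  })
);
console.log("Parameter created");
```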
- Click one of the following buttons to launch the CloudFormation stack in the same region where your API key was created.
- Click Next. On the "Specify stack details" page, provide the following information:
  - Fill in `ApiKeyParam` with the parameter name you used for storing the API key (e.g., "SwiftChatAPIKey").
  - For App Runner, choose an `InstanceTypeParam` based on your needs.
- Click Next. Keep the "Configure stack options" page as default, read the Capabilities, and check the "I acknowledge that AWS CloudFormation might create IAM resources" checkbox at the bottom.
- Click Next. On the "Review and create" page, review your configuration and click Submit.
Wait about 3-5 minutes for the deployment to finish, then click the CloudFormation stack and go to the Outputs tab. There you can find the API URL, which looks like `https://xxx.xxx.awsapprunner.com` or `https://xxx.lambda-url.xxx.on.aws`.
- Launch the App, open the drawer menu, and tap Settings.
- Paste the `API URL` and `API Key` (the Value you typed in Parameter Store) under Amazon Bedrock -> SwiftChat Server, then select your Region.
- Click the top right ✓ icon to save your configuration and start your chat.
Congratulations! Your SwiftChat App is ready to use!
- US East (N. Virginia): us-east-1
- US West (Oregon): us-west-2
- Asia Pacific (Mumbai): ap-south-1
- Asia Pacific (Singapore): ap-southeast-1
- Asia Pacific (Sydney): ap-southeast-2
- Asia Pacific (Tokyo): ap-northeast-1
- Canada (Central): ca-central-1
- Europe (Frankfurt): eu-central-1
- Europe (London): eu-west-2
- Europe (Paris): eu-west-3
- South America (São Paulo): sa-east-1
Configure Ollama (Click to expand)
- Navigate to the Settings Page and select the Ollama tab.
- Enter your Ollama Server URL, for example: `http://localhost:11434`
- Enter your Ollama Server API Key (optional).
- Once the correct Server URL is entered, you can select your desired Ollama models from the Chat Model dropdown list.
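For reference, the URL you enter here is the root of Ollama's REST API. A quick way to confirm it is reachable, and to see which models will appear in the dropdown, is to query the standard `/api/tags` endpoint, as in this sketch:

```typescript
// Sketch: verify an Ollama Server URL by listing its locally installed models.
const ollamaUrl = "http://localhost:11434";
const apiKey = ""; // optional; only needed if your server requires authentication

async function listOllamaModels(): Promise<string[]> {
  const res = await fetch(`${ollamaUrl}/api/tags`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
  });
  const body = (await res.json()) as { models: { name: string }[] };
  return body.models.map((m) => m.name); // e.g. ["llama3.2:latest", "qwen2.5:7b"]
}

listOllamaModels().then(console.log).catch(console.error);
```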
Configure DeepSeek (Click to expand)
- Go to the Settings Page and select the DeepSeek tab.
- Input your DeepSeek API Key.
- Choose DeepSeek models from the Chat Model dropdown list. Currently, the following DeepSeek models are supported:
  - `DeepSeek-V3`
  - `DeepSeek-R1`
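SwiftChat makes these calls for you; for reference only, the DeepSeek API is OpenAI-compatible, with `deepseek-chat` corresponding to DeepSeek-V3 and `deepseek-reasoner` to DeepSeek-R1. A minimal sketch:

```typescript
// Sketch: a minimal DeepSeek request. The API is OpenAI-compatible;
// "deepseek-chat" is DeepSeek-V3 and "deepseek-reasoner" is DeepSeek-R1.
const apiKey = process.env.DEEPSEEK_API_KEY ?? "";

const res = await fetch("https://api.deepseek.com/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: "deepseek-chat",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```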
Configure OpenAI (Click to expand)
- Navigate to the Settings Page and select the OpenAI tab.
- Enter your OpenAI API Key.
- Select OpenAI models from the Chat Model dropdown list. The following OpenAI models are currently supported:
  - `GPT-4o`
  - `GPT-4o mini`
  - `GPT-4.1`
  - `GPT-4.1 mini`
  - `GPT-4.1 nano`
Additionally, if you have deployed and configured the SwiftChat Server, you can enable the Use Proxy option to forward your requests.
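For reference, this is the shape of request such a configuration drives (the model name is illustrative); with Use Proxy enabled, SwiftChat forwards the request through your SwiftChat Server rather than sending it directly to api.openai.com:

```typescript
// Sketch: the shape of an OpenAI Chat Completions request (model name illustrative).
const apiKey = process.env.OPENAI_API_KEY ?? "";

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: "gpt-4.1-mini",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```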
Configure OpenAI Compatible models (Click to expand)
- Navigate to the Settings Page and select the OpenAI tab.
- Under OpenAI Compatible, enter the following information:
  - `Base URL` of your model provider
  - `API Key` of your model provider
  - `Model ID` of the models you want to use (separate multiple models with commas)
- Select one of your models from the Chat Model dropdown list.
- Click the plus button on the right to add another OpenAI-compatible model provider. You can add up to 10 OpenAI-compatible model providers.
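To illustrate how the three settings fit together, here is a sketch of the request a provider entry drives. The Base URL and Model ID shown are placeholders for whichever provider you configure (e.g., OpenRouter):

```typescript
// Sketch: how Base URL, API Key, and Model ID map onto an OpenAI-compatible call.
// All three values below are placeholders for your own provider settings.
const baseUrl = "https://openrouter.ai/api/v1";       // "Base URL"
const apiKey = process.env.PROVIDER_API_KEY ?? "";    // "API Key"
const modelId = "meta-llama/llama-3.1-70b-instruct";  // one entry from "Model ID"

const res = await fetch(`${baseUrl}/chat/completions`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: modelId,
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```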
- Real-time streaming chat with AI
- Rich Markdown Support: Tables, Code Blocks, LaTeX and More
- AI image generation with progress
- Multimodal support (images, videos & documents)
- Conversation history list view and management
- Cross-platform support (Android, iOS, macOS)
- Tablet-optimized for iPad and Android tablets
- Fast launch and responsive performance
- Multiple AI models supported (Amazon Bedrock, Ollama, DeepSeek, OpenAI and OpenAI Compatible Models)
- Fully Customizable System Prompt Assistant
- Comprehensive Multimodal Analysis: Text, Image, Document and Video
- Creative Image Suite: Generation, Virtual Try-on, Style Replication, Background Removal with Nova Canvas
- System Prompt Assistant: Useful Preset System Prompts with Full Management Capabilities (Add/Edit/Sort/Delete)
- Rich Markdown Support: Paragraph, Code Blocks, Tables, LaTeX and More
We redesigned the UI with optimized font sizes and line spacing for a more elegant and clean presentation. All of these features are also displayed seamlessly on Android and macOS with native UI.
Note: Some animated images have been sped up for demonstration. If you experience lag, please view them in Chrome, Firefox, or Edge on your computer.
- Support automatically setting the main image; the default is the previously used main image.
- Support uploading or taking a second image and sending it directly without any prompt text.
- Support automatically recognizing clothes, pants, and shoes and trying them on.
- Built-in spoken language practice for words and sentences, as well as storytelling scenarios. You can also add Custom System Prompts for voice chatting in different scenarios.
- Support Barge In by default; you can disable it in the system prompt.
- Support selecting voices on the settings page, including American/British English and Spanish, with male and female options.
- Support Echo Cancellation, so you can talk directly to the device without wearing headphones.
- Support Voice Waveform to display the volume level.
Learn Sentences
learn_sentences.mov
Telling a Story on Mac (with the Barge In feature)
story_mac.mov
Note: Amazon Nova Sonic is currently only available with the SwiftChat server.
- Record 30-second videos directly on Android and iOS for Nova analysis
- Upload large videos (1080p/4K) beyond 8MB with auto compression
- Support using the default templates to make Nova Canvas generate images, remove backgrounds, and create images in similar styles.
Quick Access Tools: Code & Content Copy, Selection Mode, Model Switch, Regenerate, Scroll Controls and Token Counter
We provide streamlined chat History and Settings pages, plus intuitive Usage statistics:
- Text copy support:
- Copy button at the bottom of messages, or tap the model name or user title section to copy directly
- Copy button in code blocks
- Copy button in reasoning blocks
- Directly select and copy code on macOS (double-click, or long press on iOS)
- Long press text to copy the entire sentence (right-click on macOS)
- Text selection mode by clicking the selection button
- Message timeline view in history
- Delete messages through long press in history
- Click to preview documents, videos, and images
- Support for collapsing and expanding the reasoning section and remembering the most recent state
- Support image generation with Chinese prompts (make sure `Amazon Nova Lite` is enabled in your selected region)
- Long press images to save or share them
- Automatic image compression to improve response speed
- Haptic feedback for Android and iOS (can be disabled in Settings)
- Support landscape mode on Android/iOS devices
- Double tap title bar to scroll to top
- Click bottom arrow to view the latest messages
- Display system prompt and model switch icon again by clicking on the chat title
- View the current session's token usage by tapping the chat title twice
- Check detailed token usage and image generation count in Settings
- In-app upgrade notifications (Android & macOS)
We have optimized the layout for landscape mode. As shown below, you can comfortably view table/code contents in landscape orientation.
The content in the video shows an early version. Where the UI or architecture differs, please refer to the current documentation.
Fast Launch Speed
- Thanks to the AOT (Ahead-of-Time) compilation of the RN Hermes engine
- Added lazy loading of complex components
- App launches instantly and is immediately ready for input
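The lazy-loading idea, sketched with React.lazy below; the component name and import path are hypothetical and not the actual SwiftChat source:

```typescript
// Sketch: lazy-load a heavy screen so it does not block app launch.
// The component name and import path are hypothetical.
import React, { Suspense, lazy } from "react";
import { ActivityIndicator } from "react-native";

const ImageGenerationScreen = lazy(() => import("./ImageGenerationScreen"));

export function LazyImageGeneration() {
  return (
    <Suspense fallback={<ActivityIndicator />}>
      <ImageGenerationScreen />
    </Suspense>
  );
}
```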
Fast Request Speed
- Speed up end-to-end API requests through image compression
- Deploying APIs in the same region as Bedrock provides lower latency
Fast Render Speed
- Using `useMemo` and custom caching to create a secondary cache for session content
- Reducing unnecessary re-renders to speed up streaming message display
- All UI components are rendered as native components
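A simplified sketch of that caching idea (not the actual SwiftChat source): each message body is memoized, and a module-level map acts as the secondary cache, so revisiting a session skips re-rendering unchanged messages.

```typescript
// Sketch of the render-caching idea (not the actual SwiftChat source).
import React, { useMemo } from "react";
import { Text } from "react-native";

// Secondary cache shared across the session, keyed by message id + length so
// a message that is still streaming in gets re-rendered as it grows.
const renderedCache = new Map<string, React.ReactElement>();

function MessageBody({ id, markdown }: { id: string; markdown: string }) {
  return useMemo(() => {
    const key = `${id}:${markdown.length}`;
    const cached = renderedCache.get(key);
    if (cached) return cached;
    const rendered = <Text>{markdown}</Text>; // stand-in for the real markdown renderer
    renderedCache.set(key, rendered);
    return rendered;
  }, [id, markdown]);
}
```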
Fast Storage Speed
- By using react-native-mmkv, messages can be read, stored, and updated 10x faster than with AsyncStorage
- Optimized session content and session list storage structure to accelerate history list display
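A minimal sketch of the storage idea; the key names and message shape are illustrative, not the actual SwiftChat schema:

```typescript
// Sketch: react-native-mmkv offers synchronous reads/writes, which is what makes
// message storage much faster than the asynchronous AsyncStorage API.
import { MMKV } from "react-native-mmkv";

const storage = new MMKV({ id: "chat-store" }); // instance id is illustrative

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function saveSession(sessionId: string, messages: ChatMessage[]): void {
  storage.set(`session:${sessionId}`, JSON.stringify(messages)); // synchronous write
}

function loadSession(sessionId: string): ChatMessage[] {
  const raw = storage.getString(`session:${sessionId}`); // synchronous read
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}
```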
- Encrypted API key storage
- Minimal permission requirements
- Local-only data storage
- No user behavior tracking
- No data collection
- Privacy-first approach
First, clone this repository. All app code is located in the `react-native` folder. Before proceeding, execute the following command to download dependencies:

`cd react-native && npm i && npm start`

For Android, open a new terminal and execute:

`npm run android`

For iOS, also open a new terminal. The first time, install the native dependencies by executing `cd ios && pod install && cd ..`, then execute the following command:

`npm run ios`
- Execute `npm start`.
- Double click `ios/SwiftChat.xcworkspace` to open the project in Xcode.
- Change the build destination to `My Mac (Mac Catalyst)`, then click the ▶ Run button.
Please refer to the API Reference.
- Android and macOS: Navigate to the Settings Page. If a new version is available, you will find it at the bottom of the page; click the app version to download and install it.
- iOS: If a new version is released on the Release page, update your local code, then rebuild and install the app with Xcode.
Note: After downloading a new version, please check the release notes to see if an API version update is required.
- For App Runner: Click and open the App Runner Services page, find and open `swiftchat-api`, then click the top right Deploy button.
- For Lambda: Click and open the Lambda Services page, find and open your Lambda function starting with `SwiftChatLambda-xxx`, click the Deploy new image button, and click Save.
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.