diff --git a/.github/workflows/openclaw-plugin-publish.yml b/.github/workflows/openclaw-plugin-publish.yml
index 7c7ac971c..5c79e5ad5 100644
--- a/.github/workflows/openclaw-plugin-publish.yml
+++ b/.github/workflows/openclaw-plugin-publish.yml
@@ -25,7 +25,7 @@ jobs:
include:
- os: macos-14
platform: darwin-arm64
- - os: macos-13
+ - os: macos-15
platform: darwin-x64
- os: ubuntu-latest
platform: linux-x64
@@ -42,6 +42,11 @@ jobs:
- name: Install dependencies
run: npm install
+ - name: Rebuild for x64 under Rosetta (darwin-x64 only)
+ if: matrix.platform == 'darwin-x64'
+ run: |
+ arch -x86_64 npm rebuild better-sqlite3
+
- name: Collect prebuild
shell: bash
run: |
@@ -87,6 +92,13 @@ jobs:
- name: Install dependencies (skip native build)
run: npm install --ignore-scripts
+ - name: Generate telemetry credentials
+ run: node scripts/generate-telemetry-credentials.cjs
+ env:
+ MEMOS_ARMS_ENDPOINT: ${{ secrets.MEMOS_ARMS_ENDPOINT }}
+ MEMOS_ARMS_PID: ${{ secrets.MEMOS_ARMS_PID }}
+ MEMOS_ARMS_ENV: ${{ secrets.MEMOS_ARMS_ENV }}
+
- name: Bump version
run: npm version ${{ inputs.version }} --no-git-tag-version
diff --git a/apps/memos-local-openclaw/.env.example b/apps/memos-local-openclaw/.env.example
index 453efc02e..bfb409298 100644
--- a/apps/memos-local-openclaw/.env.example
+++ b/apps/memos-local-openclaw/.env.example
@@ -23,3 +23,10 @@ SUMMARIZER_TEMPERATURE=0
# No memory content, queries, or personal data is ever sent — only tool names, latencies, and version info.
# Enabled by default. Set to false to opt-out.
# TELEMETRY_ENABLED=false
+#
+# Telemetry backend credentials (for maintainers / CI only).
+# End users do NOT need to set these — they are bundled into the npm package at publish time.
+# If not set and telemetry.credentials.json is absent, telemetry is silently disabled.
+# MEMOS_ARMS_ENDPOINT=https://your-arms-endpoint.log.aliyuncs.com/rum/web/v2?workspace=...&service_id=...
+# MEMOS_ARMS_PID=your-arms-pid
+# MEMOS_ARMS_ENV=prod
diff --git a/apps/memos-local-openclaw/.gitignore b/apps/memos-local-openclaw/.gitignore
index 359334014..d0bfd2a76 100644
--- a/apps/memos-local-openclaw/.gitignore
+++ b/apps/memos-local-openclaw/.gitignore
@@ -3,6 +3,12 @@ dist/
*.tsbuildinfo
.env
+# Compiled output (root level only)
+/*.js
+/*.js.map
+/*.d.ts
+/*.d.ts.map
+
# OS files
.DS_Store
Thumbs.db
@@ -19,7 +25,11 @@ ppt/
# Prebuilt native binaries (included in npm package via `files`, not in git)
prebuilds/
+# Telemetry credentials (generated by CI, not committed to git)
+telemetry.credentials.json
+
# Database files
*.sqlite
*.sqlite-journal
*.db
+/~/.openclaw/
diff --git a/apps/memos-local-openclaw/HUB-SHARING-GUIDE.md b/apps/memos-local-openclaw/HUB-SHARING-GUIDE.md
new file mode 100644
index 000000000..d550cac1e
--- /dev/null
+++ b/apps/memos-local-openclaw/HUB-SHARING-GUIDE.md
@@ -0,0 +1,447 @@
+# Team Sharing Guide
+
+This guide explains how to use the v4 team sharing workflow in `memos-local-openclaw`: how to create a team server, join from another machine, get approved, search shared memory, share tasks, publish skills, and pull skills back to local use.
+
+## What v4 adds
+
+The plugin now supports a **Hub-Client** sharing model with comprehensive team management:
+
+- **Local memory stays local** unless you explicitly share it
+- **One Hub** stores team-shared tasks, memories, and skills
+- **Clients connect to the Hub** and submit join requests for admin approval
+- **Hub port auto-derivation** — Hub port is automatically derived from the gateway port (`gatewayPort + 11`), avoiding port conflicts in multi-instance setups
+- **Port retry** — If the derived/configured port is in use, the Hub retries up to 3 consecutive ports
+- **Admins approve users** before they can access team data; admins can promote, demote, and remove members
+- **Self-removal prevention** — Admins cannot accidentally remove themselves
+- **Notification system** — Role changes (promoted/demoted), resource events (shared/unshared/removed), and Hub status changes trigger real-time notifications
+- **Pending withdrawal** — Clients can cancel pending join requests when switching roles
+- **Leave team** — Clients can leave a team with confirmation; proper cleanup and Hub notification
+- **Graceful role transitions** — Switching between Hub/Client triggers confirmation prompts, connection cleanup, and restart
+- **Search scope** can be `local`, `group`, or `all`
+- **Shared skills** can be published to a group or to the whole team
+
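+The port rules above can be sketched as follows — an illustrative model only, not the plugin's actual code, and the helper names are made up:
+
+```typescript
+// Sketch: derive the Hub port from the gateway port, retrying on conflict.
+function deriveHubPort(gatewayPort: number): number {
+  return gatewayPort + 11; // e.g. gateway 18789 -> hub 18800
+}
+
+function pickHubPort(gatewayPort: number, isFree: (port: number) => boolean): number {
+  const base = deriveHubPort(gatewayPort);
+  // If the derived port is in use (EADDRINUSE), try up to 3 consecutive ports.
+  for (let offset = 0; offset < 3; offset++) {
+    if (isFree(base + offset)) return base + offset;
+  }
+  throw new Error(`no free hub port near ${base}`);
+}
+```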
+## Core concepts
+
+### Search scope
+
+- `local` — search only your local SQLite data
+- `group` — search local data + team data visible to your groups + public team data
+- `all` — same effective permissions as `group`; use it when you want “everything I am allowed to see”
+
+### Visibility
+
+There are two different “public” concepts in the plugin:
+
+- Local `owner="public"` memory — shared **inside the current local instance** across agents
+- Team `visibility="public"` — shared **to the whole team** through the server
+
+For team sharing, the important visibility values are:
+
+- `private` — local only
+- `group` — visible to members of a group on the team server
+- `public` — visible to all approved team members on the team server
+
+### What gets stored where
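+How these values gate access can be pictured with a small sketch (illustrative only; the real server-side check may differ):
+
+```typescript
+type Visibility = "private" | "group" | "public";
+
+// Sketch: is a team-server resource visible to a given viewer?
+function visibleTo(v: Visibility, viewerGroups: string[], resourceGroup?: string): boolean {
+  if (v === "public") return true; // all approved team members
+  if (v === "group") return resourceGroup !== undefined && viewerGroups.includes(resourceGroup);
+  return false; // private never reaches the team server
+}
+```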
+
+- **Private memories and private skills** stay in your local SQLite database
+- **Shared tasks, shared chunks, and published skill bundles** are stored in the team server database
+- **Pulled skills** are restored back into your local `skills-store/` for offline use
+
+## Before you start
+
+You need:
+
+- OpenClaw installed and working
+- This plugin installed and enabled
+- OpenClaw built-in memory search disabled in `openclaw.json`
+- A machine that can stay online if you want to run the team server there
+
+## Deployment modes
+
+The plugin runs in one of two sharing roles:
+
+- `hub` — starts the team server and manages team membership
+- `client` — connects to an existing team server
+
+You enable sharing with:
+
+```jsonc
+{
+ "sharing": {
+ "enabled": true,
+ "role": "hub"
+ }
+}
+```
+
+or:
+
+```jsonc
+{
+ "sharing": {
+ "enabled": true,
+ "role": "client"
+ }
+}
+```
+
+## Option A: Create a team server
+
+Use this on the first machine in the team.
+
+### Example config
+
+```jsonc
+{
+ "plugins": {
+ "entries": {
+ "memos-local-openclaw-plugin": {
+ "enabled": true,
+ "config": {
+ "sharing": {
+ "enabled": true,
+ "role": "hub",
+ "hub": {
+ "teamName": "My Team",
+ "teamToken": "${MEMOS_TEAM_TOKEN}"
+ // port is auto-derived from gateway port; set explicitly only if needed
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+### What happens
+
+- The plugin starts the team server
+- The first bootstrap user becomes the team admin
+- The server stores user approvals, groups, shared tasks, shared chunks, and shared skills
+- The Viewer shows admin controls in **Settings → Team Sharing**
+
+### Admin responsibilities
+
+The team admin can:
+
+- approve or reject join requests from the pending users panel
+- promote members to admin or demote admins to regular members (affected users receive notifications)
+- remove team members (with confirmation prompt; self-removal is prevented)
+- view team overview: team name, total members, active members
+- see team connection and server information
+- manage who can access shared team data
+- shut down the Hub by disabling sharing; all connected clients then receive a `hub_shutdown` notification
+
+## Option B: Join an existing team server
+
+Use this on every other machine.
+
+### Example config
+
+```jsonc
+{
+ "plugins": {
+ "entries": {
+ "memos-local-openclaw-plugin": {
+ "enabled": true,
+ "config": {
+ "sharing": {
+ "enabled": true,
+ "role": "client",
+ "client": {
+ "hubAddress": "192.168.1.100:18800",
+ "userToken": "${MEMOS_USER_TOKEN}"
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+### Important note about joining
+
+In the current implementation, the **join and approval flow exists at the server API level and in the Viewer admin UI**, but the client still needs a valid `userToken` to operate as a connected client.
+
+That means the usual rollout is:
+
+1. Admin starts the server
+2. Admin creates or issues a user token for the client environment
+3. Client configures `hubAddress` + `userToken`
+4. Viewer shows the connected team / role / groups
+
+If you are testing the raw server API directly, the public join endpoint is:
+
+- `POST /api/v1/hub/join`
+
+Admin review endpoints are:
+
+- `GET /api/v1/hub/admin/pending-users`
+- `POST /api/v1/hub/admin/approve-user`
+- `POST /api/v1/hub/admin/reject-user`
+
+## Viewer walkthrough
+
+### 1. Team Sharing panel
+
+Open the Memory Viewer and go to **Settings**.
+
+The **Team Sharing** panel shows:
+
+- whether sharing is enabled
+- whether you are in `hub` (server) or `client` mode
+- which server you are connected to
+- your team, username, role, and groups
+- pending user approvals if you are an admin
+
+### 2. Memory search with team scope
+
+In the **Memories** tab, the search scope selector now supports:
+
+- `Local`
+- `Group`
+- `All`
+
+Result behavior:
+
+- local hits stay in the normal local result format
+- team hits are shown in a separate section
+- team hits include extra context like owner and group
+- use the **View Detail** action on a team result to fetch full shared memory content
+
+### 3. Task sharing
+
+In the **Tasks** tab:
+
+- open a task detail panel
+- use the share controls in the header
+- choose `Share to Group` or `Share to Public`
+- click **Share**
+- click **Unshare** to remove the task from the team
+
+What gets shared:
+
+- task metadata
+- all current task chunks
+- later chunks can be pushed by the client flow as the task continues to evolve
+
+### 4. Skill search and pull
+
+In the **Skills** tab:
+
+- search locally as usual
+- switch scope to `Group` or `All`
+- local and team skill results are shown separately
+- click **Pull to Local** on a team skill to restore the bundle locally
+
+Pulled skills are written into your local skills store, so they remain usable even if the server is offline later.
+
+## Agent tools for team sharing
+
+These are the main tools you will use with v4 sharing:
+
+### Memory tools
+
+- `memory_search(query, scope)`
+ - `scope: "local" | "group" | "all"`
+ - returns local results and, for shared scopes, a separate team result section
+- `network_memory_detail(remoteHitId)`
+ - fetches the full content for a team memory hit
+- `memory_get` / `memory_timeline`
+ - still work for **local** hits only
+
+### Task sharing tools
+
+- `task_share(taskId, visibility)`
+ - shares a task to the team
+- `task_unshare(taskId)`
+ - removes that task from the team
+
+### Skill tools
+
+- `skill_search(query, scope)`
+ - supports `local`, `group`, and `all`
+- `skill_publish(skillId, scope)`
+ - publish a skill to `public`, or to a group scope where your current flow supports it
+- `skill_unpublish(skillId)`
+ - remove a previously published skill from team sharing
+- `network_skill_pull(skillId)`
+ - pull a team skill bundle into local storage
+- `skill_get` / `skill_install`
+ - continue to work for local skills
+
+### Team info
+
+- `network_team_info()`
+ - shows team server URL, connected user, role, team, and groups
+
+## End-to-end example workflow
+
+### Example 1: Search team memory
+
+1. Open a new task in OpenClaw
+2. Ask the agent to search with `scope: "group"`
+3. Review local results first
+4. Review team hits with owner/group context
+5. Use `network_memory_detail` if a team hit looks relevant
+
+### Example 2: Share a task
+
+1. Finish a useful task locally
+2. Open the task in Viewer
+3. Choose `Share to Group` or `Share to Public`
+4. Click **Share**
+5. Teammates searching with `group` or `all` can now discover it
+
+### Example 3: Publish and pull a skill
+
+1. Publish a polished local skill with `skill_publish`
+2. Another teammate searches team skills with `scope: "all"`
+3. They click **Pull to Local** in Viewer, or call `network_skill_pull`
+4. The bundle is restored locally for reuse
+
+## What happens if the server is down
+
+The intended behavior is graceful degradation:
+
+- local memory still works
+- local skills still work
+- Viewer still opens
+- shared searches fall back to local results with an empty team section
+- share / pull actions fail cleanly until the server is back
+
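+This degradation can be pictured roughly as follows — a sketch assuming local and Hub searches are separate calls (the function shapes are illustrative):
+
+```typescript
+async function searchWithFallback(
+  query: string,
+  scope: "local" | "group" | "all",
+  searchLocal: (q: string) => unknown[],
+  searchHub: (q: string) => Promise<unknown[]>,
+) {
+  const local = searchLocal(query); // local SQLite always works offline
+  let team: unknown[] = [];
+  if (scope !== "local") {
+    try {
+      team = await searchHub(query); // may fail while the server is down
+    } catch {
+      // Hub unreachable: degrade cleanly to local-only with an empty team section.
+    }
+  }
+  return { local, team };
+}
+```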
+## Model configuration and current fallback behavior
+
+### Embedding
+
+You can configure a provider explicitly, such as:
+
+- `openai_compatible`
+- `gemini`
+- `cohere`
+- `voyage`
+- `mistral`
+- `local`
+
+### Summarizer / skill models
+
+You can configure:
+
+- `openai_compatible`
+- `anthropic`
+- `gemini`
+- `bedrock`
+
+### Current sidecar-build fallback behavior
+
+In the current plugin build:
+
+- if embedding is not configured, the plugin falls back to the **local embedding model**
+- if summarizer is not configured, the plugin falls back to the **rule-based summarizer**
+- if a configured remote provider fails, the plugin falls back to the local/rule-based path where supported
+- `openclaw` host-backed providers are defined in types, but are **not available in this sidecar build unless host capabilities are explicitly supported**
+
+If you want predictable production behavior today, configure your embedding and summarizer providers explicitly.
+
+## Multi-Instance Deployment
+
+When running multiple OpenClaw instances on the same machine (e.g., personal + work):
+
+### Port isolation
+
+- **Viewer port**: Auto-derived per instance — no conflicts
+- **Hub port**: Auto-derived as `gatewayPort + 11` (e.g., gateway `18789` → Hub `18800`, gateway `19001` → Hub `19012`)
+- **Port retry**: If the auto-derived port is in use, the Hub tries up to 3 consecutive ports automatically
+
+### Session isolation
+
+Each Viewer instance uses a unique cookie name based on its port, so you can be logged into multiple Viewer instances simultaneously in the same browser.
+
+### Database isolation
+
+Each OpenClaw instance uses its own state directory and database. Configure via `OPENCLAW_STATE_DIR` environment variable or `--state-dir` flag.
+
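+For illustration (the exact layout may differ), each instance could resolve its database like this:
+
+```typescript
+import * as path from "node:path";
+
+// Sketch: one memos.db per instance, under that instance's state directory.
+const stateDir = process.env.OPENCLAW_STATE_DIR ?? path.join(process.env.HOME ?? "", ".openclaw");
+const dbPath = path.join(stateDir, "memos.db");
+```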
+### Example: Dual-instance setup
+
+```bash
+# Instance 1 (Personal): gateway on 18789, viewer on 18799, hub on 18800
+OPENCLAW_CONFIG_PATH=~/oc-personal/openclaw.json openclaw gateway start
+
+# Instance 2 (Work): gateway on 19001, viewer on 19011, hub on 19012
+OPENCLAW_CONFIG_PATH=~/oc-work/openclaw.json openclaw gateway start
+```
+
+## Notifications
+
+The system sends real-time notifications for these events:
+
+| Event | Recipient | Message |
+|---|---|---|
+| Role promoted (to admin) | The promoted user | "You have been promoted to admin" |
+| Role demoted (to member) | The demoted user | "You have been changed to regular member" |
+| Resource shared | Team members | Resource name with sharing action |
+| Resource unshared | Team members | Resource name with unsharing action |
+| Resource removed | Resource owner | Resource name with removal action |
+| Hub shutdown | All connected clients | Hub has been shut down |
+| Member joined | Admin | New member has joined the team |
+| Member left | Admin | Member has left the team |
+
+## Troubleshooting
+
+### Viewer says sharing is disabled
+
+Check:
+
+- `sharing.enabled` is `true`
+- `sharing.role` is set correctly
+- the gateway was restarted after config changes
+
+### Viewer says client configured but disconnected
+
+Check:
+
+- the team server is running
+- `sharing.client.hubAddress` is correct
+- `sharing.client.userToken` is valid
+- the server machine is reachable on the configured port
+
+### I can see local results but no team results
+
+Check:
+
+- you selected `Group` or `All`, not `Local`
+- the server actually contains shared tasks or skills
+- your user belongs to the required group
+- the admin approved your access
+
+### Shared memory detail fails
+
+Usually this means one of the following:
+
+- the hit was generated from another user token/session context
+- the server-side `remoteHitId` expired
+- the server is unavailable
+
+Run a fresh search and try the detail action again.
+
+### Skill pull fails
+
+Check:
+
+- the team skill still exists
+- your token is valid
+- the bundle passes local safety validation
+- your local skills directory is writable
+
+## Recommended rollout for a small team
+
+1. Pick one stable machine as the server host
+2. Enable `sharing.role = "hub"` there
+3. Confirm the admin can open Viewer and see team status
+4. Configure one client machine with `hubAddress` + `userToken`
+5. Verify `network_team_info()` works
+6. Share one task, search it from another machine, and open team memory detail
+7. Publish one skill and pull it from another machine
+8. Only then roll the config out to the rest of the team
diff --git a/apps/memos-local-openclaw/README.md b/apps/memos-local-openclaw/README.md
index c7dedc327..f2a9df64f 100644
--- a/apps/memos-local-openclaw/README.md
+++ b/apps/memos-local-openclaw/README.md
@@ -5,11 +5,11 @@
[](https://nodejs.org/)
[](https://github.com/MemTensor/MemOS/tree/main/apps/memos-local-openclaw)
-Persistent local conversation memory for [OpenClaw](https://github.com/nicepkg/openclaw) AI Agents. Every conversation is automatically captured, semantically indexed, and instantly recallable — with **task summarization & skill evolution**, and **multi-agent collaborative memory**.
+Persistent local conversation memory for [OpenClaw](https://github.com/nicepkg/openclaw) AI Agents. Every conversation is automatically captured, semantically indexed, and instantly recallable — with **task summarization & skill evolution**, **team sharing for memories and skills**, and **multi-agent collaborative memory**.
-**Full-write | Hybrid Search | Task Summarization & Skill Evolution | Multi-Agent Collaboration | Memory Viewer**
+**Full-write | Hybrid Search | Task Summarization & Skill Evolution | Team Sharing | Memory Viewer**
-> **Homepage:** 🌐 [Homepage](https://memos-claw.openmem.net) · 📖 [Documentation](https://memos-claw.openmem.net/docs/index.html) · 📦 [NPM](https://www.npmjs.com/package/@memtensor/memos-local-openclaw-plugin)
+> 🌐 [Homepage](https://memos-claw.openmem.net) · 📖 [Documentation](https://memos-claw.openmem.net/docs/) · 📦 [NPM](https://www.npmjs.com/package/@memtensor/memos-local-openclaw-plugin) · 🛠 [Troubleshooting](https://memos-claw.openmem.net/docs/troubleshooting.html)
## Why MemOS
@@ -48,13 +48,17 @@ Persistent local conversation memory for [OpenClaw](https://github.com/nicepkg/o
- **Dedicated model** — Optional separate LLM model for skill generation (e.g., Claude 4.6 for higher quality)
- **LLM fallback chain** — `skillSummarizer` → `summarizer` → OpenClaw native model (auto-detected from `openclaw.json`). If all configured models fail, the next in chain is tried automatically
-### Multi-Agent Collaboration
-- **Memory isolation** — Each agent's memories are tagged with `owner`. During search, agents only see their own private memories and explicitly shared `public` memories
-- **Public memory** — `memory_write_public` tool allows agents to write shared knowledge accessible to all agents (e.g., team decisions, conventions, shared configs)
-- **Skill sharing** — Skills have a `visibility` toggle (`private`/`public`). Public skills are discoverable by all agents via `skill_search`
-- **Skill discovery** — `skill_search` combines FTS (name + description) and vector search (description embedding) with RRF fusion, followed by LLM relevance judgment. Supports `scope` parameter: `mix` (default), `self`, or `public`
-- **Publish/unpublish** — `skill_publish` / `skill_unpublish` tools toggle skill visibility. Other agents can search, preview, and install public skills
-- **Agent-aware capture** — `agent_end` event extracts `agentId` to tag all captured messages with the correct owner
+### Team Sharing (v4)
+- **Hub-Client architecture** — One Hub stores shared data; clients keep private data local and query the Hub on demand. Roles can be switched dynamically with proper confirmation and cleanup
+- **Hub port auto-derivation** — Hub port derived from gateway port (`gatewayPort + 11`) to avoid conflicts in multi-instance setups; automatic port retry on `EADDRINUSE`
+- **Admin approval flow** — Join requests require admin approval; admins can promote, demote, and remove members (with self-removal prevention)
+- **Notification system** — Role change notifications (promoted/demoted), resource sharing notifications (shared/unshared/removed) with localized messages, Hub shutdown alerts
+- **Scoped retrieval** — `memory_search` and `skill_search` support `local`, `group`, and `all` search scopes
+- **Task sharing** — `task_share` / `task_unshare` push or remove task memories from the team without changing local private storage
+- **Skill publish/pull** — Skills can be published to team visibility scopes and pulled back locally as full bundles for offline reuse
+- **Graceful state transitions** — Client-to-Hub switch triggers confirmation, pending request withdrawal, connection cleanup, and automatic restart
+- **Multi-instance support** — Viewer port, Hub port, sessions, and databases are all isolated per instance; supports running multiple OpenClaw instances on the same machine
+- **Viewer integration** — Full team management UI: connection state, member management, pending approvals, scoped search, task share controls, skill pull, notification feed, and setup guide
### Memory Migration — Reconnect 🦐
- **One-click import** — Seamlessly migrate OpenClaw's native built-in memories (SQLite + JSONL) into the MemOS intelligent memory system
@@ -87,35 +91,29 @@ Persistent local conversation memory for [OpenClaw](https://github.com/nicepkg/o
### 1. Install
-**Step 0 — Prepare build environment (macOS / Linux):**
+One command installs the plugin, all dependencies, and build tools automatically, and supports auto-upgrading to the latest version.
-This plugin uses `better-sqlite3`, a native C/C++ module. On **macOS** and **Linux**, prebuilt binaries may not be available, so **install C++ build tools first** to ensure a smooth installation:
+**macOS / Linux:**
```bash
-# macOS
-xcode-select --install
-
-# Linux (Ubuntu / Debian)
-sudo apt install build-essential python3
+curl -fsSL https://cdn.memtensor.com.cn/memos-local-openclaw/install.sh | bash
```
-> **Windows users:** `better-sqlite3` ships prebuilt binaries for Windows + Node.js LTS, so you can usually skip this step and go directly to Step 1. If installation still fails, install [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) (select "C++ build tools" workload).
->
-> Already have build tools? Skip to Step 1. Not sure? Run the install command above — it's safe to re-run.
->
-> **Still having issues?** See the [Troubleshooting](#troubleshooting) section, the [detailed troubleshooting guide](https://memtensor.github.io/MemOS/apps/memos-local-openclaw/docs/troubleshooting.html), or the [official better-sqlite3 troubleshooting docs](https://github.com/WiseLibs/better-sqlite3/blob/master/docs/troubleshooting.md).
+**Windows (PowerShell):**
+
+```powershell
+powershell -c "irm https://cdn.memtensor.com.cn/memos-local-openclaw/install.ps1 | iex"
+```
-**Step 1 — Install the plugin:**
+**Alternative — Install via OpenClaw CLI:**
```bash
openclaw plugins install @memtensor/memos-local-openclaw-plugin
```
-The plugin is installed under `~/.openclaw/extensions/memos-local-openclaw-plugin` and registered as `memos-local-openclaw-plugin`. Dependencies and `better-sqlite3` native module are built automatically during installation.
-
> **Note:** The Memory Viewer starts only when the **OpenClaw gateway** is running. After install, **configure** `openclaw.json` (step 2) and **start the gateway** (step 3); the viewer will then be available at `http://127.0.0.1:18799`.
>
-> **Installation failed?** If `better-sqlite3` compilation fails during install, manually rebuild after ensuring build tools are installed:
+> **Installation failed?** See the [Troubleshooting](#troubleshooting) section, the [detailed troubleshooting guide](https://memos-claw.openmem.net/docs/troubleshooting.html), or the [official better-sqlite3 troubleshooting docs](https://github.com/WiseLibs/better-sqlite3/blob/master/docs/troubleshooting.md). You can also try manually rebuilding the native module:
> ```bash
> cd ~/.openclaw/extensions/memos-local-openclaw-plugin && npm rebuild better-sqlite3
> ```
@@ -184,7 +182,7 @@ Add the plugin config to `~/.openclaw/openclaw.json`:
| Mistral | `mistral` | `mistral-embed` | |
| Local (offline) | `local` | — | Uses `Xenova/all-MiniLM-L6-v2`, no API needed |
-> **No embedding config?** The plugin falls back to the local model automatically. You can start with zero configuration and add a cloud provider later for better quality.
+> **No embedding config?** In the current sidecar build, the plugin falls back to the local embedding model automatically. If you need deterministic team-wide behavior, configure an explicit provider.
#### Summarizer Provider Options
@@ -259,6 +257,87 @@ memos-local: started (embedding: openai_compatible)
╚══════════════════════════════════════════╝
```
+## Team Sharing (v4)
+
+Team Sharing turns multiple OpenClaw instances into a collaborative memory network. One instance serves as the **Hub** (team server), others connect as **Clients**. Private data stays local; only explicitly shared tasks, memories, and skills are visible to the team.
+
+### Key Capabilities
+
+| Capability | Description |
+|---|---|
+| **Hub / Client architecture** | One Hub stores shared data; clients keep private data local and query the Hub on demand |
+| **Hub port auto-derivation** | Hub port is automatically derived from the gateway port (`gatewayPort + 11`), avoiding port conflicts in multi-instance setups. Explicit `hub.port` config overrides this. |
+| **Port retry on conflict** | If the derived/configured Hub port is in use (`EADDRINUSE`), the server automatically retries up to 3 consecutive ports |
+| **Admin approval flow** | New members submit join requests; admin approves/rejects from the Viewer |
+| **Self-removal prevention** | Admins cannot accidentally remove themselves from the team |
+| **Role change notifications** | When an admin promotes/demotes a member, the affected user receives a notification |
+| **Resource notifications** | Shared/unshared/removed resources trigger localized notifications with resource names |
+| **Pending withdrawal** | Clients can cancel pending join requests when switching roles or disabling sharing |
+| **Graceful role transitions** | Switching from Client to Hub (or vice versa) triggers confirmation prompts, proper cleanup of remote connections, and restart |
+| **Hub shutdown notification** | When a Hub owner disables sharing, all connected clients receive a `hub_shutdown` notification |
+| **Leave team** | Clients can leave a team with a confirmation dialog; the Hub is notified and the client's data is cleaned up |
+| **Scoped retrieval** | `memory_search` and `skill_search` support `local`, `group`, and `all` search scopes |
+| **Task sharing** | Push/remove task memories to/from the team |
+| **Skill publish/pull** | Publish skills to team visibility; pull team skills locally as full bundles for offline use |
+
+### Quick Setup
+
+**Option A — Start a Hub (team server):**
+
+```jsonc
+{
+ "config": {
+ "sharing": {
+ "enabled": true,
+ "role": "hub",
+ "hub": {
+ "teamName": "My Team",
+ "teamToken": "${MEMOS_TEAM_TOKEN}"
+ // port is auto-derived; set explicitly only if needed
+ }
+ }
+ }
+}
+```
+
+**Option B — Join as Client:**
+
+```jsonc
+{
+ "config": {
+ "sharing": {
+ "enabled": true,
+ "role": "client",
+ "client": {
+ "hubAddress": "192.168.1.100:18800"
+ }
+ }
+ }
+}
+```
+
+You can also configure sharing entirely through the **Viewer → Settings → Team Sharing** panel — no need to edit `openclaw.json` manually.
+
+### Multi-Instance Deployment
+
+When running multiple OpenClaw instances on the same machine (e.g., personal + work):
+
+- **Viewer port**: Each instance derives its Viewer port from the gateway port, so they won't conflict
+- **Hub port**: Auto-derived as `gatewayPort + 11` (e.g., gateway `18789` → Hub `18800`, gateway `19001` → Hub `19012`)
+- **Session isolation**: Each instance uses a separate cookie name based on its Viewer port, so multiple Viewers can be logged in simultaneously
+- **Database isolation**: Each instance uses its own `memos.db` under its respective state directory
+
+### Viewer Team Sharing Panel
+
+The **Settings → Team Sharing** panel provides a complete management interface:
+
+- **Hub mode**: Team name, member count, active members, pending approvals, admin controls (approve/reject/promote/demote/remove)
+- **Client mode**: Connection status, team info, leave team button, notification feed
+- **Setup guide cards**: Always visible — choose "Host a Team" or "Join a Team" with step-by-step instructions
+- **Real-time notifications**: Role changes, resource sharing events, Hub status changes
+
+For the full end-user workflow, see [`HUB-SHARING-GUIDE.md`](./HUB-SHARING-GUIDE.md).
+
### 5. Verify Memory is Working
**Step A** — Have a conversation with your OpenClaw agent about anything.
@@ -361,21 +440,26 @@ Query → FTS5 + Vector dual recall → RRF Fusion → MMR Rerank
## Agent Tools
-The plugin provides **12 smart tools** (11 registered tools + auto-recall) and auto-installs the **memos-memory-guide** skill:
+The plugin provides local memory tools plus v4 team-sharing tools, and auto-installs the **memos-memory-guide** skill:
| Tool | Purpose | When to Use |
|------|---------|-------------|
| `auto_recall` | Automatically injects relevant memories into agent context each turn (via `before_agent_start` hook) | Runs automatically — no manual call needed |
-| `memory_search` | Search memories (auto-filtered to current agent + public); returns excerpts + `chunkId` / `task_id` | When auto-recall returned nothing or you need a different query |
-| `memory_get` | Get full original text of a memory chunk | When you need to verify exact details from a search hit |
-| `memory_timeline` | Surrounding conversation around a chunk | When you need the exact dialogue before/after a hit |
-| `memory_write_public` | Write a memory to the shared public space (owner="public") | When the agent discovers knowledge all agents should access |
-| `task_summary` | Full structured summary of a completed task | When a hit has `task_id` and you need the full story (goal, steps, result) |
-| `skill_get` | Get skill content by `skillId` or `taskId` | When a hit has a linked task/skill and you want the reusable experience guide |
+| `memory_search` | Search memories with `scope: local \| group \| all`; team hits are returned separately from local hits | When auto-recall returned nothing or you need local + shared context |
+| `memory_get` | Get full original text of a local memory chunk | When you need to verify exact details from a local search hit |
+| `memory_timeline` | Surrounding conversation around a local chunk | When you need the exact dialogue before/after a local hit |
+| `network_memory_detail` | Fetch full content for a team memory hit | When a shared search hit looks relevant and you need full detail |
+| `memory_write_public` | Write a memory to the local shared public space (`owner="public"`) | When the agent discovers knowledge all local agents should access |
+| `task_summary` | Full structured summary of a completed task | When a hit has `task_id` and you need the full story |
+| `task_share` | Push a local task and its memories to the team | When a task should be searchable by your group or the whole team |
+| `task_unshare` | Remove a shared task from the team | When a task should stop being shared |
+| `skill_get` | Get local skill content by `skillId` or `taskId` | When a hit has a linked task/skill and you want the reusable guide |
| `skill_install` | Install a skill into the agent workspace | When the skill should be permanently available for future turns |
-| `skill_search` | Search skills via FTS + vector + LLM relevance; scope: `mix` / `self` / `public` | When an agent needs to discover existing skills for a task |
-| `skill_publish` | Set a skill's visibility to public | When a skill should be discoverable by other agents |
-| `skill_unpublish` | Set a skill's visibility back to private | When a skill should no longer be shared |
+| `skill_search` | Search skills with `scope: local \| group \| all` | When an agent needs to discover local or team-shared skills |
+| `skill_publish` | Publish a skill to team sharing or local public visibility, depending on scope | When a skill should be shared with teammates |
+| `skill_unpublish` | Make a previously shared skill private again | When a skill should no longer be shared |
+| `network_skill_pull` | Pull a team skill bundle into local storage | When a teammate's shared skill should be usable locally/offline |
+| `network_team_info` | Show current team server URL, user, role, and groups | When you need to inspect current team connection state |
| `memory_viewer` | Get the URL of the Memory Viewer web UI | When the user asks where to view or manage their memories |
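As a quick illustration, a `memory_search` invocation with an explicit scope might look like the sketch below. Only the `scope` values (`local` / `group` / `all`) are documented in the table above; the surrounding parameter names are assumptions, not the plugin's actual schema:

```typescript
// Hypothetical memory_search call shape — only the scope values
// (local | group | all) are documented; other field names are assumptions.
const searchRequest = {
  tool: "memory_search",
  params: {
    query: "how did we configure the staging database?",
    scope: "all", // "local" = this instance only, "group" = your group, "all" = whole team
  },
};
console.log(JSON.stringify(searchRequest));
```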
### Search Parameters
@@ -403,7 +487,7 @@ Open `http://127.0.0.1:18799` in your browser after starting the gateway.
| **Analytics** | Daily write/read activity charts, memory/task/skill totals, role breakdown |
| **Logs** | Tool call log (memory_search, auto_recall, memory_add, etc.) with input/output, duration, and tool filter; auto-refresh |
| **Import** | 🦐 OpenClaw native memory migration — scan, one-click import with real-time SSE progress, smart dedup, pause/resume; post-processing for task & skill generation |
-| **Settings** | Online configuration for embedding model, summarizer model, skill evolution settings, viewer port |
+| **Settings** | Online configuration plus **Team Sharing** status, current role, team/groups, and admin pending-user actions |
**Viewer won't open?**
diff --git a/apps/memos-local-openclaw/demo-guide.html b/apps/memos-local-openclaw/demo-guide.html
new file mode 100644
index 000000000..370e241f3
--- /dev/null
+++ b/apps/memos-local-openclaw/demo-guide.html
@@ -0,0 +1,720 @@
+
+
+
+ 为 OpenClaw 提供完全本地化的持久记忆、智能任务总结、技能自动进化和多智能体协同。npm 一键安装,支持分级模型配置。
+ Fully local persistent memory, smart task summarization, auto skill evolution, and multi-agent collaboration for OpenClaw. One-command install, tiered model support.
+
+
+ 完全本地化:数据存于本机 SQLite,零云依赖。Viewer 仅 127.0.0.1,密码保护。
+ Fully local: Data in local SQLite, zero cloud dependency. Viewer 127.0.0.1 only, password-protected.
+
+
+
+
💾
全量写入Full-Write
每次对话自动捕获,语义分片后持久化。Auto-captures every conversation, chunks semantically.
+
⚡
任务总结与技能进化Tasks & Skills
碎片对话归纳为结构化任务,再提炼为可复用技能并持续升级。Conversations organized into tasks, then distilled into skills that auto-upgrade.
Embedding/摘要/技能可独立配置不同模型。Each pipeline configurable with different models.
+
🤝
多智能体协同Multi-Agent
记忆隔离 + 公共记忆 + 技能共享,多 Agent 协同进化。Memory isolation + public memory + skill sharing for collective evolution.
+
🦐
原生记忆导入Native Memory Import
一键迁移 OpenClaw 内置记忆,智能去重、断点续传、实时进度。One-click migration from OpenClaw built-in memories with smart dedup, resume, and real-time progress.
+
👥
团队共享中心Team Sharing Hub
Hub-Client 架构,跨实例共享记忆/任务/技能。审批流程、角色管理、实时通知、端口自动推导。Hub-Client architecture for cross-instance sharing. Approval flow, role management, real-time notifications, auto port derivation.
+
🔗
LLM 智能降级LLM Fallback Chain
技能模型 → 摘要模型 → OpenClaw 原生模型三级自动降级,零手动干预。Skill model → summarizer → OpenClaw native model, auto-fallback with zero manual intervention.
+
✏️
任务/技能 CRUDTask & Skill CRUD
列表卡片直接编辑、删除、重试技能生成、切换可见性。Edit, delete, retry skill gen, toggle visibility — all from list cards.
+
+
+
+
+
系统架构Architecture
+
四条流水线:记忆写入 → 任务总结与技能进化(异步)→ 智能检索 → 协同共享。每个 Agent 拥有独立记忆空间,通过公共记忆和技能共享实现协同进化。Four pipelines: write → task & skill evolution (async) → retrieval → collaboration. Each agent has isolated memory; public memory and skill sharing enable collective evolution.
每轮自动:before_agent_start 用用户消息检索 → LLM 过滤相关 → 注入 system 上下文;无结果时提示 agent 自生成 query 调 memory_search。Per turn: before_agent_start searches with user message → LLM filters relevant → inject system context; if no hits, hint agent to call memory_search with self-generated query.
Embedding / Summarizer API 可选,不配自动用本地模型Embedding / Summarizer APIs are optional; if unset, the plugin falls back to local models
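The per-turn auto-recall flow described above can be sketched roughly as follows. The hook and helper names (`beforeAgentStart`, `search`, `llmFilter`) are illustrative placeholders, not the plugin's actual API:

```typescript
// Illustrative sketch of the auto-recall pipeline; names are placeholders.
type Hit = { id: string; text: string; score: number };

async function beforeAgentStart(
  userMessage: string,
  search: (q: string) => Promise<Hit[]>,
  llmFilter: (q: string, hits: Hit[]) => Promise<Hit[]>,
): Promise<string | null> {
  const hits = await search(userMessage);              // 1. search with the raw user message
  if (hits.length === 0) return null;                  //    no hits -> agent is hinted to call memory_search itself
  const relevant = await llmFilter(userMessage, hits); // 2. LLM keeps only the relevant hits
  if (relevant.length === 0) return null;
  // 3. surviving hits are injected as system context
  return "Relevant memories:\n" + relevant.map(h => `- ${h.text}`).join("\n");
}
```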
+
+
+
Step 0:安装 C++ 编译工具(macOS / Linux 推荐)Step 0: Install C++ Build Tools (macOS / Linux recommended)
+
插件依赖 better-sqlite3 原生模块。macOS / Linux 用户建议先安装编译工具,可大幅提升安装成功率。Windows 用户使用 Node.js LTS 版本时通常有预编译文件,可直接跳到 Step 1。The plugin depends on better-sqlite3, a native C/C++ module. macOS / Linux users should install build tools first. Windows users with Node.js LTS usually have prebuilt binaries and can skip to Step 1.
安装失败?最常见的问题是 better-sqlite3 原生模块编译失败。请确认已执行上方 Step 0,然后手动重建:cd ~/.openclaw/extensions/memos-local-openclaw-plugin && npm rebuild better-sqlite3。更多方案请查看 安装排查指南 或 better-sqlite3 官方文档。Install failed? The most common issue is better-sqlite3 compilation failure. Ensure Step 0 is done, then manually rebuild: cd ~/.openclaw/extensions/memos-local-openclaw-plugin && npm rebuild better-sqlite3. See the troubleshooting guide or official better-sqlite3 docs for more solutions.
升级自动完成依赖安装、旧版清理和原生模块编译,无需手动操作。如果 update 命令不可用,先删除旧目录再重新安装:rm -rf ~/.openclaw/extensions/memos-local-openclaw-plugin && openclaw plugins install @memtensor/memos-local-openclaw-plugin(记忆数据不受影响)。Upgrade automatically handles dependencies, legacy cleanup, and native module compilation. If update is unavailable, delete the old directory first: rm -rf ~/.openclaw/extensions/memos-local-openclaw-plugin && openclaw plugins install @memtensor/memos-local-openclaw-plugin (memory data is stored separately and won't be affected).
+
+
配置Configuration
+
两种方式:编辑 openclaw.json 或通过 Viewer 网页面板在线修改。支持分级模型。Two methods: edit openclaw.json or via Viewer web panel. Tiered models supported.
安装后每次对话自动存入记忆。访问 http://127.0.0.1:18799 使用 Viewer。Every conversation auto-stored. Visit http://127.0.0.1:18799 for Viewer.
+
+
+
+
🦐 记忆迁移 — 再续前缘🦐 Memory Migration — Reconnect
+
将 OpenClaw 原生内置的记忆数据(SQLite 存储的对话历史)无缝迁移到 MemOS 的智能记忆系统。你和 AI 共同积累的每一段对话,都值得被记住。Seamlessly migrate OpenClaw's native built-in memory data (SQLite conversation history) to MemOS's intelligent memory system. Every conversation you've built with AI deserves to be remembered.
技能进化:从已完成的任务中提炼可复用技能,生成 SKILL.md 文件并安装到工作区。Skill evolution: Distill reusable skills from completed tasks, generate SKILL.md and install to workspace.
+
+
后处理在同一 Agent 内串行执行,不同 Agent 之间可并行(并发度可配置 1–8)。已处理过的会话自动跳过。支持选择只生成任务、只生成技能或两者同时执行。Post-processing runs serially within each agent, with parallel processing across agents (configurable concurrency 1–8). Already processed sessions are auto-skipped. Choose task-only, skill-only, or both.
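The scheduling rule above (serial within one agent, capped parallelism across agents) can be sketched like this; `processSession` and the queue shape are assumptions for illustration, not the plugin's internals:

```typescript
// Sketch: each agent's sessions run serially; up to `concurrency` agents run at once.
async function processAllAgents(
  sessionsByAgent: Map<string, string[]>,
  processSession: (agent: string, session: string) => Promise<void>,
  concurrency: number, // configurable 1-8 in the Viewer
): Promise<void> {
  const agentQueues = [...sessionsByAgent.entries()];
  let next = 0;
  // Worker: claim the next agent, then drain that agent's sessions in order.
  async function worker(): Promise<void> {
    while (next < agentQueues.length) {
      const [agent, sessions] = agentQueues[next++]; // claim is synchronous, so no race
      for (const s of sessions) await processSession(agent, s); // serial per agent
    }
  }
  const workers = Math.min(concurrency, agentQueues.length);
  await Promise.all(Array.from({ length: workers }, () => worker()));
}
```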
+
+
断点续传Resume & Stop
+
导入和后处理均支持随时暂停:Both import and post-processing support pause/resume:
+
+
点击 停止 按钮后,进度自动保存。Click Stop, progress auto-saved.
+
刷新页面后自动检测未完成的导入,恢复进度条显示。On page refresh, auto-detect incomplete imports and restore progress display.
+
再次点击开始即从上次中断处继续,已处理的记忆自动跳过。Click start again to continue from where you left off — processed memories are auto-skipped.
+
导入和后处理在后台运行,关闭 Viewer 页面不影响执行。Import and post-processing run in the background — closing the Viewer page won't interrupt them.
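The resume behavior above boils down to persisting the set of processed IDs and filtering them out on restart. This is a minimal sketch of that idea, assuming a plain ID list rather than the plugin's actual storage:

```typescript
// Sketch of resumable import: skip already-processed IDs, take the next batch.
function nextBatch(
  allIds: string[],
  processed: Set<string>, // persisted when the user clicks Stop
  batchSize: number,
): string[] {
  return allIds.filter(id => !processed.has(id)).slice(0, batchSize);
}
```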
+
+
+
🦐 来源标识:所有通过迁移导入的记忆都带有 🦐 标识,在 Viewer 的记忆列表中可一眼区分原生导入和对话生成的记忆。🦐 Source Tag: All migrated memories are tagged with 🦐, making them visually distinguishable from conversation-generated memories in the Viewer.
skill_publish 将技能设为公开,其他 Agent 可通过 skill_search 发现并安装。skill_unpublish 设为私有。skill_publish makes a skill public and discoverable via skill_search; skill_unpublish sets it back to private. Both take skillId (required).
Team Sharing 将多个 OpenClaw 实例连接为协作网络。一个实例作为 Hub(团队服务端),其他实例作为 Client 连接。私有数据始终留在本地,仅明确共享的任务、记忆和技能对团队可见。Team Sharing connects multiple OpenClaw instances into a collaborative network. One instance serves as the Hub (team server) while others connect as Clients. Private data stays local — only explicitly shared tasks, memories, and skills are visible to the team.
MemOS 原生支持多 Agent 场景。每个 Agent 的记忆和任务通过 owner 字段隔离(格式 agent:{agentId}),检索时自动过滤为当前 Agent + public。MemOS natively supports multi-agent scenarios. Each agent's memories and tasks are isolated via an owner field (agent:{agentId}); retrieval automatically filters to current agent + public.
+
+
记忆隔离:Agent A 无法检索 Agent B 的私有记忆Memory Isolation: Agent A cannot retrieve Agent B's private memories
+
公共记忆:通过 memory_write_public 写入 owner="public" 的记忆,所有 Agent 可检索Public Memory: Use memory_write_public to write owner="public" memories discoverable by all agents
+
技能共享:通过 skill_publish 将技能设为公开,其他 Agent 可通过 skill_search 发现并安装Skill Sharing: Use skill_publish to make skills public; other agents discover and install via skill_search
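The isolation rule above reduces to a simple owner filter at retrieval time. This sketch uses an in-memory list instead of the plugin's actual SQLite query, but the visibility logic matches the documented `owner` format:

```typescript
// Sketch of owner-based isolation: an agent sees its own memories plus public ones.
type Memory = { owner: string; text: string };

function visibleTo(agentId: string, memories: Memory[]): Memory[] {
  const self = `agent:${agentId}`; // owner format documented as agent:{agentId}
  return memories.filter(m => m.owner === self || m.owner === "public");
}
```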