feat: add spacetimedb json migration tooling

2026-04-27 14:54:26 +08:00
parent ded6f6ee2a
commit 9a79494c68
13 changed files with 1532 additions and 2 deletions


@@ -0,0 +1,196 @@
# SpacetimeDB JSON string migration procedure design
## Background
`spacetime sql` can only reliably read public tables or tables visible to the database owner. Runtime source-of-truth tables such as `ai_result_reference` remain private, so a direct SQL export hits `no such table` or private-table errors and is not a stable cross-server migration path.
SpacetimeDB reducers must stay deterministic and cannot touch the filesystem or network. Procedures, however, can return data and can read private tables inside a transaction, so the migration now works as follows:
1. An export procedure inside `spacetime-module` reads the migration table whitelist and returns the migration JSON string directly.
2. The Node ops script calls the export procedure via `spacetime call` by default and writes the returned JSON string to a local file.
3. The Node ops script reads the local JSON file and passes its content as a string argument to the import procedure via the HTTP request body.
4. The import procedure validates the JSON and the table whitelist, then writes into the target database inside a transaction.
Procedures no longer talk to an HTTP file bridge and no longer accept local file paths from the deploy machine. This sidesteps SpacetimeDB's HTTP restrictions on private/special-purpose addresses and avoids relaying private table contents through a temporary HTTP service.
`spacetime login show --token` prints the CLI login token, not the database connection token required by HTTP `/v1/database/.../call`. The ops scripts default to the CLI login session; do not pass the CLI token to `--token` during migration. A database connection token is only needed when `--use-http` is passed explicitly.
## Interface
### Migration operator authorization
The migration procedures read and write private tables, so they cannot be open to arbitrary logged-in identities. The module adds a private table, `database_migration_operator`, as the migration operator whitelist:
- `operator_identity`: the SpacetimeDB identity authorized to call the migration procedures.
- `created_at`: when the authorization was written.
- `created_by`: the identity that granted the authorization.
- `note`: an ops memo, used only to distinguish source, environment, or temporary use.
`database_migration_operator` only gates migration procedure calls; it is never exported or imported, to avoid copying the source database's ops privileges to the target.
On first authorization the operator table is empty, so the first operator must be authorized with the `GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET` bootstrap secret compiled into the module. The publish scripts generate a strong random bootstrap secret when building or publishing the SpacetimeDB module, inject it into the wasm build environment, and print it to the console; operators must record the secret printed by each publish for the corresponding database. Once the table contains operators, further grants and revocations can only be issued by existing operators; the bootstrap secret is then no longer accepted for privilege escalation.
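The bootstrap rule above can be sketched as a small predicate (an illustrative sketch, not the module's actual Rust logic; the function and field names are made up):

```javascript
// Authorization rule: an empty operator table accepts only the compiled-in bootstrap
// secret; once any operator exists, only existing operators may grant or revoke,
// and the bootstrap secret no longer escalates privileges.
function canAuthorizeOperator({ operatorTable, callerIdentity, providedSecret, compiledSecret }) {
  if (operatorTable.length === 0) {
    return (
      typeof compiledSecret === 'string' &&
      compiledSecret.length >= 16 && // minimum length enforced by the publish scripts
      providedSecret === compiledSecret
    );
  }
  return operatorTable.includes(callerIdentity);
}
```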
New procedures:
- `authorize_database_migration_operator`: grant a migration operator or update their note.
- `revoke_database_migration_operator`: revoke a migration operator.
Ops flow:
```bash
npm run spacetime:publish:maincloud -- --database <database>
# Console output:
# [spacetime:maincloud] 迁移引导密钥: <random secret for this publish>
```
After publishing, authorize an operator on the same machine using the current `spacetime login` identity:
```bash
node scripts/spacetime-authorize-migration-operator.mjs \
--server maincloud \
--database xushi-p4wfr \
--bootstrap-secret <random secret from this publish> \
--operator-identity <identity-hex> \
--note "2026-04-27 migration"
```
Once the migration is done, the temporary operator can be revoked:
```bash
node scripts/spacetime-revoke-migration-operator.mjs \
--server maincloud \
--database xushi-p4wfr \
--operator-identity <identity-hex>
```
For production, it is recommended to republish after the migration with `--no-migration-bootstrap-secret`, producing a module version without `GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET` set, so the bootstrap secret does not linger in the wasm.
### Publish script secret behavior
All scripts that build or publish `spacetime-module` currently generate and print the migration bootstrap secret by default:
- `npm run spacetime:publish:maincloud`: generates the secret before the local `cargo build` and prints `[spacetime:maincloud] 迁移引导密钥: ...`.
- `npm run dev:rust`: generates the secret before the local `spacetime publish --module-path` and prints `[dev:rust] 迁移引导密钥: ...`.
- `npm run deploy:rust:remote`: generates the secret before building the release-package wasm, prints `[deploy:rust] 迁移引导密钥: ...`, and writes the same secret to `migration-bootstrap-secret.txt` at the release package root. When the server runs `./start.sh` to publish the wasm, the secret in that file is printed again.
If the wasm should no longer carry the bootstrap secret after the migration, pass `--no-migration-bootstrap-secret` when republishing. If a remote release package uses `--skip-spacetime-build`, `--no-migration-bootstrap-secret` must be passed as well; otherwise the script refuses to generate a new secret that cannot be injected into the old wasm.
### Export
`export_database_migration_to_file(ctx, input)`
Input fields:
- `include_tables`: optional table-name whitelist. When empty, all migration tables supported by the current implementation are exported.
Return fields:
- `ok`: whether the call succeeded.
- `schema_version`: version of the migration JSON structure.
- `migration_json`: the full migration JSON string on success, empty on failure.
- `table_stats`: per-table export statistics.
- `error_message`: failure reason.
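An illustrative successful result (field values are made up; the shape follows the list above):

```javascript
const exportResult = {
  ok: true,
  schema_version: 1,
  migration_json: '{"schema_version":1,"exported_at_micros":0,"tables":[]}',
  table_stats: [
    { table_name: 'ai_task', exported_row_count: 42, imported_row_count: 0, skipped_row_count: 0 },
  ],
  error_message: null,
};
// Callers should treat ok=false as fatal and surface error_message, as the repo scripts do.
if (!exportResult.ok) throw new Error(exportResult.error_message ?? 'migration procedure failed');
```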
### Import
`import_database_migration_from_file(ctx, input)`
Input fields:
- `migration_json`: the full migration JSON string produced by the export procedure.
- `include_tables`: optional table-name whitelist. When empty, all supported tables in the file are imported.
- `replace_existing`: whether to clear the target tables first. Must be `true` for a full cross-server migration.
- `dry_run`: parse and count only; no tables are written.
Return fields:
- `ok`: whether the call succeeded.
- `schema_version`: version of the migration JSON structure.
- `migration_json`: always empty for imports, to avoid echoing the large JSON back.
- `table_stats`: per-table import/skip statistics.
- `error_message`: failure reason.
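A dry-run input for a full migration might look like this (an illustrative sketch; the JSON string is a stand-in for real export output):

```javascript
const importInput = {
  migration_json: '{"schema_version":1,"exported_at_micros":0,"tables":[]}', // from the export procedure
  include_tables: [],     // empty = all supported tables present in the JSON
  replace_existing: true, // a full cross-server migration must clear target tables first
  dry_run: true,          // parse and count only; nothing is written
};
```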
The names `export_database_migration_to_file` / `import_database_migration_from_file` are kept to avoid churning procedure names people have already memorized; semantically they no longer mean that the module reads or writes files directly.
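The migration JSON itself follows the module's `MigrationFile` / `MigrationTable` serde structs added in this commit (row contents below are illustrative):

```javascript
const migrationFile = {
  schema_version: 1,               // must match MIGRATION_SCHEMA_VERSION on import
  exported_at_micros: 1774587266000000,
  tables: [
    { name: 'ai_task', rows: [] }, // one JSON object per exported row
  ],
};
const migrationJson = JSON.stringify(migrationFile); // this string is what crosses the wire
```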
## Node scripts
For a local export, first make sure the local SpacetimeDB service and source database are reachable, then authorize the local calling identity:
```bash
node scripts/spacetime-authorize-migration-operator.mjs \
--server dev \
--database xushi-p4wfr \
--bootstrap-secret <random secret printed when the local source database was published> \
--operator-identity <identity from local spacetime login show> \
--note "local export"
```
The export script calls the local source database's procedure and saves the returned JSON:
```bash
node scripts/spacetime-export-migration-json.mjs \
--server dev \
--database xushi-p4wfr \
--out tmp/spacetime-migrations/source-2026-04-27.json
```
After copying `tmp/spacetime-migrations/source-2026-04-27.json` to the server, log into the target SpacetimeDB on the server and authorize the server-side calling identity:
```bash
node scripts/spacetime-authorize-migration-operator.mjs \
--server maincloud \
--database xushi-p4wfr \
--bootstrap-secret <random secret printed when the server's target database was published> \
--operator-identity <identity from spacetime login show on the server> \
--note "server import"
```
The import script reads the server-local file and passes the JSON string into the target database's procedure:
```bash
node scripts/spacetime-import-migration-json.mjs \
--server maincloud \
--database xushi-p4wfr \
--in tmp/spacetime-migrations/source-2026-04-27.json \
--replace-existing
```
Before a real import, add `--dry-run` first to confirm the JSON parses, the version matches, and every table name is in the migration whitelist.
To migrate in batches, separate table names with commas:
```bash
node scripts/spacetime-export-migration-json.mjs \
--database xushi-p4wfr \
--out tmp/spacetime-migrations/ai.json \
--include ai_task,ai_task_stage,ai_text_chunk,ai_result_reference
```
`--server` accepts `dev`, `local`, and `maincloud`, or a SpacetimeDB server URL passed directly. The scripts default to `spacetime call`, using the current machine's CLI login session. The database name can be provided via `--database`, `GENARRATIVE_SPACETIME_MAINCLOUD_DATABASE`, or `GENARRATIVE_SPACETIME_DATABASE`.
The authorization script additionally supports:
- `--bootstrap-secret` / `GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET`
- `--operator-identity` / `GENARRATIVE_SPACETIME_MIGRATION_OPERATOR_IDENTITY`
- `--note`
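The documented resolution order for the database name can be sketched as a simple fallback chain (mirroring `parseArgs` in the repo's `spacetime-migration-common.mjs`):

```javascript
// CLI flag wins, then the maincloud-specific env var, then the generic one.
function resolveDatabase(cliValue, env) {
  return (
    cliValue ||
    env.GENARRATIVE_SPACETIME_MAINCLOUD_DATABASE ||
    env.GENARRATIVE_SPACETIME_DATABASE ||
    ''
  );
}
```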
## Table scope
The first version covers the tables behind the current private-table errors plus the main runtime source-of-truth tables:
- Auth: `auth_store_snapshot`, `user_account`, `auth_identity`, `refresh_session`
- AI: `ai_task`, `ai_task_stage`, `ai_text_chunk`, `ai_result_reference`
- Runtime archive and account projections: `runtime_snapshot`, `runtime_setting`, `user_browse_history`, `profile_dashboard_state`, `profile_wallet_ledger`, `profile_invite_code`, `profile_referral_relation`, `profile_played_world`, `profile_membership`, `profile_recharge_order`, `profile_save_archive`
- RPG runtime truth: `player_progression`, `chapter_progression`, `npc_state`, `story_session`, `story_event`, `inventory_slot`, `battle_state`, `treasure_record`, `quest_record`, `quest_log`
- Custom worlds: `custom_world_profile`, `custom_world_session`, `custom_world_agent_session`, `custom_world_agent_message`, `custom_world_agent_operation`, `custom_world_draft_card`, `custom_world_gallery_entry`
- Asset index: `asset_object`, `asset_entity_binding`
- Puzzle: `puzzle_agent_session`, `puzzle_agent_message`, `puzzle_work_profile`, `puzzle_runtime_run`
- Big Fish: `big_fish_creation_session`, `big_fish_agent_message`, `big_fish_asset_slot`, `big_fish_runtime_run`
Whenever a new SpacetimeDB table is added, it must also be added to the migration whitelist and to this document.
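The up-front whitelist check this implies can be sketched as follows (the set below is an excerpt; the real whitelist covers every table listed above):

```javascript
const MIGRATION_WHITELIST = new Set([
  'ai_task', 'ai_task_stage', 'ai_text_chunk', 'ai_result_reference', // excerpt only
]);

// Reject any include_tables entry that is not in the migration whitelist.
function validateIncludeTables(includeTables) {
  const unknown = includeTables.filter((name) => !MIGRATION_WHITELIST.has(name));
  if (unknown.length > 0) {
    throw new Error(`tables not in migration whitelist: ${unknown.join(', ')}`);
  }
}
```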
## Risks and limitations
The migration JSON travels as a procedure return value and as an HTTP request body, so it is subject to SpacetimeDB response-body, request-body, and intermediate-proxy size limits. For larger datasets, migrate in batches via `include_tables` first; if a single table is itself too large, add a chunked procedure rather than reviving the HTTP file bridge.
Hand-writing JSON for `spacetime call` in PowerShell tends to get double quotes stripped. Prefer the Node scripts in the repo, which go straight to the HTTP API and avoid shell re-processing and command-line length limits.
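The request those scripts send can be sketched as follows (URL shape from the token note above; SpacetimeDB's call endpoint takes the procedure argument wrapped in a one-element JSON array, as in the repo's common module). Building the body with `JSON.stringify` is what sidesteps the PowerShell quote stripping entirely:

```javascript
// Build a /v1/database/<db>/call/<procedure> request without any shell quoting involved.
function buildCallRequest(serverUrl, database, procedureName, input, token) {
  return {
    url: `${serverUrl.replace(/\/+$/u, '')}/v1/database/${encodeURIComponent(database)}/call/${encodeURIComponent(procedureName)}`,
    method: 'POST',
    headers: {
      'content-type': 'application/json; charset=utf-8',
      // database connection token, not the CLI login token from `spacetime login show --token`
      ...(token ? { authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify([input]), // one-element array wrapping the procedure input
  };
}
```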


@@ -27,7 +27,8 @@ usage() {
--skip-upload 只生成本地发布包,不上传服务器
--skip-web-build 跳过 Vite 构建,仅用于调试
--skip-api-build 跳过 api-server 构建,仅用于调试
--skip-spacetime-build 跳过 wasm 构建,仅用于调试
--skip-spacetime-build 跳过 wasm 构建,仅用于调试;此时必须同时传 --no-migration-bootstrap-secret
--no-migration-bootstrap-secret 构建不带迁移引导密钥的 spacetime-module wasm
目标服务器要求:
Ubuntu x86_64已安装 node、spacetime CLI并允许执行目标目录内的 start.sh / stop.sh。
@@ -127,6 +128,36 @@ replace_placeholder_in_file() {
sed -i "s|${placeholder}|${escaped_value}|g" "${file_path}"
}
generate_migration_bootstrap_secret() {
node -e 'const crypto = require("crypto"); process.stdout.write(crypto.randomBytes(32).toString("hex"));'
}
prepare_migration_bootstrap_secret() {
case "${MIGRATION_BOOTSTRAP_SECRET_MODE}" in
auto)
MIGRATION_BOOTSTRAP_SECRET="$(generate_migration_bootstrap_secret)"
;;
manual)
if [[ "${#MIGRATION_BOOTSTRAP_SECRET}" -lt 16 ]]; then
echo "[deploy:rust] 迁移引导密钥至少需要 16 个字符。" >&2
exit 1
fi
;;
disabled)
unset GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET
echo "[deploy:rust] 未启用迁移引导密钥。"
return
;;
*)
echo "[deploy:rust] 未知迁移引导密钥模式: ${MIGRATION_BOOTSTRAP_SECRET_MODE}" >&2
exit 1
;;
esac
export GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET="${MIGRATION_BOOTSTRAP_SECRET}"
echo "[deploy:rust] 迁移引导密钥: ${MIGRATION_BOOTSTRAP_SECRET}"
}
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/.." && pwd)"
SERVER_RS_DIR="${REPO_ROOT}/server-rs"
@@ -147,6 +178,8 @@ SKIP_WEB_BUILD=0
SKIP_API_BUILD=0
SKIP_SPACETIME_BUILD=0
BUILD_COMPLETED=0
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="auto"
while [[ $# -gt 0 ]]; do
case "$1" in
@@ -214,6 +247,16 @@ while [[ $# -gt 0 ]]; do
SKIP_SPACETIME_BUILD=1
shift
;;
--migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET="${2:?缺少 --migration-bootstrap-secret 的值}"
MIGRATION_BOOTSTRAP_SECRET_MODE="manual"
shift 2
;;
--no-migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="disabled"
shift
;;
*)
echo "[deploy:rust] 未知参数: $1" >&2
usage >&2
@@ -227,6 +270,12 @@ if [[ ! "${BUILD_NAME}" =~ ^[0-9A-Za-z._-]+$ ]]; then
exit 1
fi
if [[ "${SKIP_SPACETIME_BUILD}" -eq 1 && "${MIGRATION_BOOTSTRAP_SECRET_MODE}" != "disabled" ]]; then
echo "[deploy:rust] --skip-spacetime-build 无法把迁移引导密钥注入 wasm。" >&2
echo "[deploy:rust] 请移除 --skip-spacetime-build或同时传 --no-migration-bootstrap-secret。" >&2
exit 1
fi
TARGET_DIR="${BUILD_ROOT}/${BUILD_NAME}"
WEB_DIR="${TARGET_DIR}/web"
API_BINARY_SOURCE="${SERVER_RS_DIR}/target/x86_64-unknown-linux-gnu/release/api-server"
@@ -249,6 +298,8 @@ fi
require_command node
require_command cargo
prepare_migration_bootstrap_secret
if [[ "${SKIP_WEB_BUILD}" -ne 1 ]]; then
require_command npm
fi
@@ -310,6 +361,11 @@ fi
copy_required_file "${WASM_SOURCE}" "${TARGET_DIR}/spacetime_module.wasm" "spacetime-module wasm"
if [[ "${MIGRATION_BOOTSTRAP_SECRET_MODE}" != "disabled" ]]; then
printf "%s\n" "${MIGRATION_BOOTSTRAP_SECRET}" >"${TARGET_DIR}/migration-bootstrap-secret.txt"
chmod 600 "${TARGET_DIR}/migration-bootstrap-secret.txt"
fi
cat >"${TARGET_DIR}/web-server.mjs" <<'WEB_SERVER'
import http from 'node:http';
import fs from 'node:fs';
@@ -529,6 +585,7 @@ API_PORT="${GENARRATIVE_API_PORT:-__GENARRATIVE_DEFAULT_API_PORT__}"
API_LOG="${GENARRATIVE_API_LOG:-info,tower_http=info}"
WEB_HOST="${GENARRATIVE_WEB_HOST:-__GENARRATIVE_DEFAULT_WEB_HOST__}"
WEB_PORT="${GENARRATIVE_WEB_PORT:-__GENARRATIVE_DEFAULT_WEB_PORT__}"
MIGRATION_BOOTSTRAP_SECRET_FILE="${SCRIPT_DIR}/migration-bootstrap-secret.txt"
# 日志默认落文件,显式关闭 ANSI 颜色码,避免控制字符写入 *.log。
export NO_COLOR="${NO_COLOR:-1}"
@@ -778,6 +835,11 @@ if [[ "${CLEAR_DATABASE}" -eq 1 ]]; then
fi
echo "[start] 发布 SpacetimeDB wasm: ${SPACETIME_DATABASE}"
if [[ -f "${MIGRATION_BOOTSTRAP_SECRET_FILE}" ]]; then
echo "[start] 迁移引导密钥: $(cat "${MIGRATION_BOOTSTRAP_SECRET_FILE}")"
else
echo "[start] 未启用迁移引导密钥。"
fi
if ! spacetime --root-dir="${SPACETIME_ROOT_DIR}" "${PUBLISH_ARGS[@]}"; then
echo "[start] SpacetimeDB 发布失败。" >&2
echo "[start] 如果错误包含 403 Forbidden 或 is not authorized通常是当前 CLI 身份无权更新目标数据库。" >&2
@@ -868,6 +930,7 @@ cat >"${TARGET_DIR}/README.md" <<EOF
- \`web/\`Vite release 静态资源
- \`api-server\`x86_64-unknown-linux-gnu release 可执行文件
- \`spacetime_module.wasm\`wasm32-unknown-unknown release 模块
- \`migration-bootstrap-secret.txt\`:本发布包 wasm 编译时注入的迁移引导密钥;服务器 \`start.sh\` 发布时会显示,迁移授权完成后可删除
- \`web-server.mjs\`:静态网站与 API 反代入口
- \`start.sh\` / \`stop.sh\`:目标服务器启动与停止脚本
@@ -896,6 +959,7 @@ cat >"${TARGET_DIR}/README.md" <<EOF
- \`GENARRATIVE_SPACETIME_ROOT_DIR\`:默认使用发布目录下的 \`.spacetimedb/\`,同时承载本地 SpacetimeDB 运行数据与 CLI 身份。
- \`GENARRATIVE_SPACETIME_TIMEOUT_SECONDS\`:等待 SpacetimeDB 就绪的秒数,默认 \`60\`。
- OSS、LLM、短信、微信等业务密钥仍通过目标服务器环境变量或同目录 \`.env.local\` 管理。
- 迁移引导密钥由构建发布包时随机生成,构建日志和服务器 \`start.sh\` 发布日志都会显示同一份密钥。
EOF
BUILD_COMPLETED=1


@@ -10,6 +10,7 @@ usage() {
./scripts/dev-rust-stack.sh --api-timeout-seconds 600
./scripts/dev-rust-stack.sh --skip-spacetime --skip-publish
./scripts/dev-rust-stack.sh --preserve-database
./scripts/dev-rust-stack.sh --no-migration-bootstrap-secret
npm run dev:rust:logs -- --follow
说明:
@@ -17,6 +18,7 @@ usage() {
2. 当前开发阶段默认 publish server-rs/crates/spacetime-module 时追加 -c=on-conflict 在结构冲突时清理旧模块数据。
3. 只有显式传入 --preserve-database 时,才会跳过 -c=on-conflict。
4. SpacetimeDB 默认使用 server-rs/.spacetimedb/local 作为本地数据与日志目录。
5. 默认在发布模块前随机生成迁移引导密钥,注入 GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET 并显示在控制台。
EOF
}
@@ -223,6 +225,36 @@ sync_local_spacetime_install() {
fi
}
generate_migration_bootstrap_secret() {
node -e 'const crypto = require("crypto"); process.stdout.write(crypto.randomBytes(32).toString("hex"));'
}
prepare_migration_bootstrap_secret() {
case "${MIGRATION_BOOTSTRAP_SECRET_MODE}" in
auto)
MIGRATION_BOOTSTRAP_SECRET="$(generate_migration_bootstrap_secret)"
;;
manual)
if [[ "${#MIGRATION_BOOTSTRAP_SECRET}" -lt 16 ]]; then
echo "[dev:rust] 迁移引导密钥至少需要 16 个字符。" >&2
exit 1
fi
;;
disabled)
unset GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET
echo "[dev:rust] 未启用迁移引导密钥。"
return
;;
*)
echo "[dev:rust] 未知迁移引导密钥模式: ${MIGRATION_BOOTSTRAP_SECRET_MODE}" >&2
exit 1
;;
esac
export GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET="${MIGRATION_BOOTSTRAP_SECRET}"
echo "[dev:rust] 迁移引导密钥: ${MIGRATION_BOOTSTRAP_SECRET}"
}
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/.." && pwd)"
SERVER_RS_DIR="${REPO_ROOT}/server-rs"
@@ -244,6 +276,8 @@ API_SERVER_TIMEOUT_SECONDS="300"
SKIP_SPACETIME=0
SKIP_PUBLISH=0
PRESERVE_DATABASE=0
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="auto"
PIDS=()
NAMES=()
@@ -334,6 +368,16 @@ while [[ $# -gt 0 ]]; do
PRESERVE_DATABASE=1
shift
;;
--migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET="${2:?缺少 --migration-bootstrap-secret 的值}"
MIGRATION_BOOTSTRAP_SECRET_MODE="manual"
shift 2
;;
--no-migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="disabled"
shift
;;
*)
echo "[dev:rust] 未知参数: $1" >&2
usage >&2
@@ -417,6 +461,7 @@ fi
if [[ "${SKIP_PUBLISH}" -ne 1 ]]; then
echo "[dev:rust] 等待 SpacetimeDB 就绪"
wait_for_spacetime "${SPACETIME_SERVER}" "${SPACETIME_TIMEOUT_SECONDS}" "${SPACETIME_ROOT_DIR}" "${PIDS[0]:-}"
prepare_migration_bootstrap_secret
PUBLISH_ARGS=(
publish


@@ -0,0 +1,35 @@
#!/usr/bin/env node
import {
callSpacetimeProcedureViaCli,
ensureProcedureOk,
parseArgs,
} from './spacetime-migration-common.mjs';
try {
const options = parseArgs(process.argv.slice(2));
if (!options.operatorIdentity) {
throw new Error('必须传入 --operator-identity。');
}
const input = {
bootstrap_secret: options.bootstrapSecret || '',
operator_identity_hex: options.operatorIdentity,
note: options.note || '',
};
const result = await callSpacetimeProcedureViaCli(
options,
'authorize_database_migration_operator',
input,
);
ensureProcedureOk(result);
console.log(
`[spacetime:migration:operator] 已授权 ${result.operator_identity_hex ?? options.operatorIdentity}`,
);
} catch (error) {
console.error(
`[spacetime:migration:operator] ${error instanceof Error ? error.message : String(error)}`,
);
process.exit(1);
}


@@ -0,0 +1,55 @@
#!/usr/bin/env node
import { writeFile } from 'node:fs/promises';
import path from 'node:path';
import {
callSpacetimeProcedureAuto,
ensureParentDir,
ensureProcedureOk,
parseArgs,
} from './spacetime-migration-common.mjs';
try {
const options = parseArgs(process.argv.slice(2));
if (!options.out) {
throw new Error('必须传入 --out。');
}
const input = {
include_tables: options.includeTables,
};
const result = await callSpacetimeProcedureAuto(
options,
'export_database_migration_to_file',
input,
);
ensureProcedureOk(result);
if (typeof result.migration_json !== 'string' || result.migration_json.trim() === '') {
throw new Error('导出 procedure 没有返回 migration_json。');
}
const outPath = path.resolve(options.out);
await ensureParentDir(outPath);
await writeFile(outPath, result.migration_json, 'utf8');
console.log(`[spacetime:migration:export] 已写入 ${outPath}`);
printTableStats(result.table_stats);
} catch (error) {
console.error(
`[spacetime:migration:export] ${error instanceof Error ? error.message : String(error)}`,
);
process.exit(1);
}
function printTableStats(tableStats) {
if (!Array.isArray(tableStats) || tableStats.length === 0) {
return;
}
const rows = tableStats.map((stat) => ({
table: stat.table_name,
exported: stat.exported_row_count,
}));
console.table(rows);
}


@@ -0,0 +1,60 @@
#!/usr/bin/env node
import { readFile } from 'node:fs/promises';
import path from 'node:path';
import {
assertReadableFile,
callSpacetimeProcedureAuto,
ensureProcedureOk,
parseArgs,
} from './spacetime-migration-common.mjs';
try {
const options = parseArgs(process.argv.slice(2));
if (!options.in) {
throw new Error('必须传入 --in。');
}
const inPath = path.resolve(options.in);
await assertReadableFile(inPath);
const migrationJson = await readFile(inPath, 'utf8');
if (!migrationJson.trim()) {
throw new Error(`迁移文件为空: ${inPath}`);
}
const input = {
migration_json: migrationJson,
include_tables: options.includeTables,
replace_existing: options.replaceExisting === true,
dry_run: options.dryRun === true,
};
const result = await callSpacetimeProcedureAuto(
options,
'import_database_migration_from_file',
input,
);
ensureProcedureOk(result);
console.log(
`[spacetime:migration:import] ${options.dryRun ? 'dry-run 完成' : '导入完成'}: ${inPath}`,
);
printTableStats(result.table_stats);
} catch (error) {
console.error(
`[spacetime:migration:import] ${error instanceof Error ? error.message : String(error)}`,
);
process.exit(1);
}
function printTableStats(tableStats) {
if (!Array.isArray(tableStats) || tableStats.length === 0) {
return;
}
const rows = tableStats.map((stat) => ({
table: stat.table_name,
imported: stat.imported_row_count,
skipped: stat.skipped_row_count,
}));
console.table(rows);
}


@@ -0,0 +1,337 @@
import { spawn } from 'node:child_process';
import { access, mkdir } from 'node:fs/promises';
import path from 'node:path';
export function parseArgs(argv) {
const options = {
database:
process.env.GENARRATIVE_SPACETIME_MAINCLOUD_DATABASE ||
process.env.GENARRATIVE_SPACETIME_DATABASE ||
'',
bootstrapSecret: process.env.GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET || '',
includeTables: [],
operatorIdentity: process.env.GENARRATIVE_SPACETIME_MIGRATION_OPERATOR_IDENTITY || '',
passthrough: [],
note: '',
server:
process.env.GENARRATIVE_SPACETIME_MAINCLOUD_SERVER ||
process.env.GENARRATIVE_SPACETIME_SERVER ||
'',
serverUrl:
process.env.GENARRATIVE_SPACETIME_MAINCLOUD_SERVER_URL ||
process.env.GENARRATIVE_SPACETIME_SERVER_URL ||
'',
token:
process.env.GENARRATIVE_SPACETIME_MAINCLOUD_TOKEN ||
process.env.GENARRATIVE_SPACETIME_TOKEN ||
'',
};
for (let index = 0; index < argv.length; index += 1) {
const arg = argv[index];
const readValue = (name) => {
const value = argv[index + 1];
if (!value || value.startsWith('--')) {
throw new Error(`${name} 缺少参数值。`);
}
index += 1;
return value;
};
if (arg === '--server') {
options.server = readValue(arg);
} else if (arg === '--use-http') {
options.useHttp = true;
} else if (arg === '--server-url') {
options.serverUrl = readValue(arg);
} else if (arg === '--token') {
options.token = readValue(arg);
} else if (arg === '--bootstrap-secret') {
options.bootstrapSecret = readValue(arg);
} else if (arg === '--operator-identity') {
options.operatorIdentity = readValue(arg);
} else if (arg === '--note') {
options.note = readValue(arg);
} else if (arg === '--root-dir') {
options.rootDir = readValue(arg);
} else if (arg === '--database') {
options.database = readValue(arg);
} else if (arg === '--out') {
options.out = readValue(arg);
} else if (arg === '--in') {
options.in = readValue(arg);
} else if (arg === '--include') {
options.includeTables = readValue(arg)
.split(',')
.map((value) => value.trim())
.filter(Boolean);
} else if (arg === '--replace-existing') {
options.replaceExisting = true;
} else if (arg === '--dry-run') {
options.dryRun = true;
} else if (arg === '--anonymous' || arg === '--no-config') {
options.passthrough.push(arg);
} else {
throw new Error(`未知参数: ${arg}`);
}
}
return options;
}
export function buildSpacetimeCallArgs(options, procedureName, input) {
if (!options.database) {
throw new Error('必须传入 --database。');
}
const args = [];
if (options.rootDir) {
args.push(`--root-dir=${options.rootDir}`);
}
args.push('call');
if (options.server) {
args.push('-s', options.server);
}
args.push(...options.passthrough);
args.push(options.database, procedureName, JSON.stringify(input), '-y');
return args;
}
export async function callSpacetimeProcedure(options, procedureName, input) {
if (!options.database) {
throw new Error('必须传入 --database或设置 GENARRATIVE_SPACETIME_DATABASE。');
}
const serverUrl = resolveServerUrl(options).replace(/\/+$/u, '');
const url = `${serverUrl}/v1/database/${encodeURIComponent(options.database)}/call/${encodeURIComponent(procedureName)}`;
const headers = {
'content-type': 'application/json; charset=utf-8',
};
if (options.token) {
headers.authorization = `Bearer ${options.token}`;
}
let response;
try {
response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify([input]),
});
} catch (error) {
throw new Error(
`SpacetimeDB HTTP 请求失败: ${url}; ${error instanceof Error ? error.message : String(error)}`,
);
}
const text = await response.text();
if (!response.ok) {
throw new Error(
`SpacetimeDB HTTP ${response.status}: ${trimPreview(text)}${buildHttpAuthHint(text)}`,
);
}
return parseProcedureResult(text);
}
export async function callSpacetimeProcedureAuto(options, procedureName, input) {
if (options.useHttp) {
return callSpacetimeProcedure(options, procedureName, input);
}
return callSpacetimeProcedureViaCli(options, procedureName, input);
}
export async function callSpacetimeProcedureViaCli(options, procedureName, input) {
const args = buildSpacetimeCallArgs(options, procedureName, input);
const output = await runSpacetimeCli(args);
return parseProcedureResult(output);
}
export function parseProcedureResult(output) {
const candidates = [];
const trimmed = output.trim();
if (trimmed) {
candidates.push(trimmed);
}
for (const line of output.split(/\r?\n/u)) {
const value = line.trim();
if (value.startsWith('{') || value.startsWith('[')) {
candidates.push(value);
}
}
for (const candidate of candidates) {
try {
return normalizeProcedureResult(JSON.parse(candidate));
} catch {
// SpacetimeDB CLI 在不同版本中可能附带说明文本,继续尝试后续候选。
}
}
throw new Error(`无法解析 procedure 返回值: ${trimmed}`);
}
export function ensureProcedureOk(result) {
if (!result.ok) {
throw new Error(result.error_message ?? '迁移 procedure 返回失败。');
}
}
export async function ensureParentDir(filePath) {
await mkdir(path.dirname(path.resolve(filePath)), { recursive: true });
}
export async function assertReadableFile(filePath) {
await access(path.resolve(filePath));
}
function normalizeProcedureResult(value) {
if (value && typeof value === 'object' && !Array.isArray(value)) {
return value;
}
if (Array.isArray(value)) {
return normalizeSatsProduct(value);
}
throw new Error('procedure 返回值不是对象。');
}
function normalizeSatsProduct(value) {
if (value.length === 3) {
return {
ok: normalizeSatsValue(value[0]),
operator_identity_hex: normalizeSatsOption(value[1]),
error_message: normalizeSatsOption(value[2]),
};
}
return {
ok: normalizeSatsValue(value[0]),
schema_version: normalizeSatsValue(value[1]),
migration_json: normalizeSatsOption(value[2]),
table_stats: normalizeTableStats(value[3]),
error_message: normalizeSatsOption(value[4]),
};
}
function normalizeSatsValue(value) {
if (Array.isArray(value)) {
return value.map((item) => normalizeSatsValue(item));
}
if (value && typeof value === 'object') {
return Object.fromEntries(
Object.entries(value).map(([key, entry]) => [key, normalizeSatsValue(entry)]),
);
}
return value;
}
function normalizeSatsOption(value) {
if (Array.isArray(value)) {
if (value.length === 2 && value[0] === 0) {
return normalizeSatsValue(value[1]);
}
if (value.length === 0 || value[0] === 1) {
return null;
}
}
return normalizeSatsValue(value);
}
function normalizeTableStats(value) {
if (!Array.isArray(value)) {
return [];
}
return value.map((entry) => {
if (entry && typeof entry === 'object' && !Array.isArray(entry)) {
return normalizeSatsValue(entry);
}
if (Array.isArray(entry)) {
return {
table_name: normalizeSatsValue(entry[0]),
exported_row_count: normalizeSatsValue(entry[1]),
imported_row_count: normalizeSatsValue(entry[2]),
skipped_row_count: normalizeSatsValue(entry[3]),
};
}
return entry;
});
}
function resolveServerUrl(options) {
if (options.serverUrl) {
return options.serverUrl;
}
const server = (options.server || 'maincloud').trim();
if (server.startsWith('http://') || server.startsWith('https://')) {
return server;
}
if (server === 'dev') {
return 'http://127.0.0.1:3101';
}
if (server === 'local') {
return 'http://127.0.0.1:3000';
}
if (!server || server === 'maincloud') {
return 'https://maincloud.spacetimedb.com';
}
throw new Error(`未知 SpacetimeDB server: ${server}。请改用 --server-url 显式传入地址。`);
}
function trimPreview(text) {
const trimmed = text.trim();
if (trimmed.length <= 4000) {
return trimmed;
}
return `${trimmed.slice(0, 4000)}...`;
}
function buildHttpAuthHint(text) {
if (!text.includes('InvalidSignature') && !text.includes('TokenError')) {
return '';
}
return '。提示:这里需要 SpacetimeDB 客户端连接 token不是 `spacetime login show --token` 输出的 CLI 登录 token授权/撤销请直接使用 CLI 登录态,不要传 --token。';
}
function runSpacetimeCli(args) {
return new Promise((resolve, reject) => {
const child = spawn('spacetime', args, {
cwd: process.cwd(),
shell: false,
stdio: ['ignore', 'pipe', 'pipe'],
});
let output = '';
child.stdout.on('data', (chunk) => {
output += chunk.toString();
});
child.stderr.on('data', (chunk) => {
output += chunk.toString();
});
child.on('error', reject);
child.on('exit', (code, signal) => {
if (signal) {
reject(new Error(`spacetime call 被信号中断: ${signal}`));
return;
}
if (code !== 0) {
reject(new Error(`spacetime call 失败,退出码 ${code}: ${trimPreview(output)}`));
return;
}
resolve(output);
});
});
}


@@ -7,6 +7,8 @@ SERVER_RS_DIR="${REPO_ROOT}/server-rs"
MODULE_PATH="${SERVER_RS_DIR}/target/wasm32-unknown-unknown/release/spacetime_module.wasm"
SPACETIME_SERVER_ALIAS="maincloud"
CLEAR_DATABASE=0
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="auto"
load_env_file() {
local env_file="$1"
@@ -39,13 +41,45 @@ usage() {
npm run spacetime:publish:maincloud
npm run spacetime:publish:maincloud -- --database <database>
npm run spacetime:publish:maincloud -- --clear-database
npm run spacetime:publish:maincloud -- --no-migration-bootstrap-secret
说明:
发布 server-rs/crates/spacetime-module 到 SpacetimeDB Maincloud。
数据库名优先读取 --database其次读取 GENARRATIVE_SPACETIME_MAINCLOUD_DATABASE。
默认在构建 wasm 前随机生成迁移引导密钥,注入 GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET 并显示在控制台。
EOF
}
generate_migration_bootstrap_secret() {
node -e 'const crypto = require("crypto"); process.stdout.write(crypto.randomBytes(32).toString("hex"));'
}
prepare_migration_bootstrap_secret() {
case "${MIGRATION_BOOTSTRAP_SECRET_MODE}" in
auto)
MIGRATION_BOOTSTRAP_SECRET="$(generate_migration_bootstrap_secret)"
;;
manual)
if [[ "${#MIGRATION_BOOTSTRAP_SECRET}" -lt 16 ]]; then
echo "[spacetime:maincloud] 迁移引导密钥至少需要 16 个字符。" >&2
exit 1
fi
;;
disabled)
unset GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET
echo "[spacetime:maincloud] 未启用迁移引导密钥。"
return
;;
*)
echo "[spacetime:maincloud] 未知迁移引导密钥模式: ${MIGRATION_BOOTSTRAP_SECRET_MODE}" >&2
exit 1
;;
esac
export GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET="${MIGRATION_BOOTSTRAP_SECRET}"
echo "[spacetime:maincloud] 迁移引导密钥: ${MIGRATION_BOOTSTRAP_SECRET}"
}
load_env_file "${REPO_ROOT}/.env"
load_env_file "${REPO_ROOT}/.env.local"
@@ -70,6 +104,16 @@ while [[ $# -gt 0 ]]; do
CLEAR_DATABASE=1
shift
;;
--migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET="${2:?缺少 --migration-bootstrap-secret 的值}"
MIGRATION_BOOTSTRAP_SECRET_MODE="manual"
shift 2
;;
--no-migration-bootstrap-secret)
MIGRATION_BOOTSTRAP_SECRET=""
MIGRATION_BOOTSTRAP_SECRET_MODE="disabled"
shift
;;
*)
echo "[spacetime:maincloud] 未知参数: $1" >&2
usage >&2
@@ -89,11 +133,18 @@ if ! command -v cargo >/dev/null 2>&1; then
exit 1
fi
if ! command -v node >/dev/null 2>&1; then
echo "[spacetime:maincloud] 缺少 node 命令,无法生成迁移引导密钥。" >&2
exit 1
fi
if ! command -v spacetime >/dev/null 2>&1; then
echo "[spacetime:maincloud] 缺少 spacetime CLI请先安装并登录 Maincloud。" >&2
exit 1
fi
prepare_migration_bootstrap_secret
echo "[spacetime:maincloud] 构建 spacetime-module wasm"
cargo build \
--manifest-path "${SERVER_RS_DIR}/Cargo.toml" \


@@ -0,0 +1,33 @@
#!/usr/bin/env node
import {
callSpacetimeProcedureViaCli,
ensureProcedureOk,
parseArgs,
} from './spacetime-migration-common.mjs';
try {
const options = parseArgs(process.argv.slice(2));
if (!options.operatorIdentity) {
throw new Error('必须传入 --operator-identity。');
}
const input = {
operator_identity_hex: options.operatorIdentity,
};
const result = await callSpacetimeProcedureViaCli(
options,
'revoke_database_migration_operator',
input,
);
ensureProcedureOk(result);
console.log(
`[spacetime:migration:operator] 已撤销 ${result.operator_identity_hex ?? options.operatorIdentity}`,
);
} catch (error) {
console.error(
`[spacetime:migration:operator] ${error instanceof Error ? error.message : String(error)}`,
);
process.exit(1);
}

server-rs/Cargo.lock generated

@@ -2698,6 +2698,7 @@ dependencies = [
"serde_json",
"shared-kernel",
"spacetimedb",
"spacetimedb-lib",
]
[[package]]


@@ -11,6 +11,7 @@ crate-type = ["cdylib"]
log = { workspace = true }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
spacetimedb-lib = { version = "=2.1.0", default-features = false, features = ["serde"] }
module-ai = { path = "../module-ai", default-features = false, features = ["spacetime-types"] }
module-assets = { path = "../module-assets", default-features = false, features = ["spacetime-types"] }
module-big-fish = { path = "../module-big-fish", default-features = false, features = ["spacetime-types"] }


@@ -20,7 +20,9 @@ use module_quest::{
};
pub(crate) use serde_json::{Map as JsonMap, Value as JsonValue, json};
pub(crate) use shared_kernel::format_timestamp_micros;
pub use spacetimedb::{ProcedureContext, ReducerContext, SpacetimeType, Table, Timestamp};
pub use spacetimedb::{
Identity, ProcedureContext, ReducerContext, SpacetimeType, Table, Timestamp,
};
use std::collections::HashSet;
mod ai;
@@ -29,6 +31,7 @@ mod auth;
mod big_fish;
mod domain_types;
mod entry;
mod migration;
mod puzzle;
mod runtime;
@@ -38,6 +41,7 @@ pub use auth::*;
pub use big_fish::*;
pub use domain_types::*;
pub use entry::*;
pub use migration::*;
pub use runtime::*;
#[spacetimedb::table(accessor = player_progression)]


@@ -0,0 +1,648 @@
use crate::*;
use serde::{Deserialize, Serialize};
use spacetimedb_lib::sats::de::serde::DeserializeWrapper;
use spacetimedb_lib::sats::ser::serde::SerializeWrapper;
use std::collections::HashSet;
use crate::puzzle::{
puzzle_agent_message, puzzle_agent_session, puzzle_runtime_run, puzzle_work_profile,
};
const MIGRATION_SCHEMA_VERSION: u32 = 1;
const MIGRATION_MAX_TABLE_NAME_LEN: usize = 96;
const MIGRATION_MAX_OPERATOR_NOTE_CHARS: usize = 160;
const MIGRATION_MIN_BOOTSTRAP_SECRET_LEN: usize = 16;
const MIGRATION_BOOTSTRAP_SECRET: Option<&str> =
option_env!("GENARRATIVE_SPACETIME_MIGRATION_BOOTSTRAP_SECRET");
#[spacetimedb::table(accessor = database_migration_operator)]
pub struct DatabaseMigrationOperator {
#[primary_key]
pub operator_identity: Identity,
pub created_at: Timestamp,
pub created_by: Identity,
pub note: String,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationExportInput {
pub include_tables: Vec<String>,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationImportInput {
pub migration_json: String,
pub include_tables: Vec<String>,
pub replace_existing: bool,
pub dry_run: bool,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationAuthorizeOperatorInput {
pub bootstrap_secret: String,
pub operator_identity_hex: String,
pub note: String,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationRevokeOperatorInput {
pub operator_identity_hex: String,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationTableStat {
pub table_name: String,
pub exported_row_count: u64,
pub imported_row_count: u64,
pub skipped_row_count: u64,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationProcedureResult {
pub ok: bool,
pub schema_version: u32,
pub migration_json: Option<String>,
pub table_stats: Vec<DatabaseMigrationTableStat>,
pub error_message: Option<String>,
}
#[derive(Clone, Debug, PartialEq, Eq, SpacetimeType)]
pub struct DatabaseMigrationOperatorProcedureResult {
pub ok: bool,
pub operator_identity_hex: Option<String>,
pub error_message: Option<String>,
}
#[derive(Serialize, Deserialize)]
struct MigrationFile {
schema_version: u32,
exported_at_micros: i64,
tables: Vec<MigrationTable>,
}
#[derive(Serialize, Deserialize)]
struct MigrationTable {
name: String,
rows: Vec<serde_json::Value>,
}
macro_rules! migration_tables {
($macro_name:ident $(, $arg:expr)* $(,)?) => {
$macro_name! {
$($arg,)*
auth_store_snapshot,
user_account,
auth_identity,
refresh_session,
ai_task,
ai_task_stage,
ai_text_chunk,
ai_result_reference,
runtime_snapshot,
runtime_setting,
user_browse_history,
profile_dashboard_state,
profile_wallet_ledger,
profile_invite_code,
profile_referral_relation,
profile_played_world,
profile_membership,
profile_recharge_order,
profile_save_archive,
player_progression,
chapter_progression,
npc_state,
story_session,
story_event,
inventory_slot,
battle_state,
treasure_record,
quest_record,
quest_log,
custom_world_profile,
custom_world_session,
custom_world_agent_session,
custom_world_agent_message,
custom_world_agent_operation,
custom_world_draft_card,
custom_world_gallery_entry,
asset_object,
asset_entity_binding,
puzzle_agent_session,
puzzle_agent_message,
puzzle_work_profile,
puzzle_runtime_run,
big_fish_creation_session,
big_fish_agent_message,
big_fish_asset_slot,
big_fish_runtime_run
}
};
}
macro_rules! collect_all_migration_tables {
($ctx:expr, $include_tables:expr, $tables:expr) => {
migration_tables!(collect_migration_table, $ctx, $include_tables, $tables);
};
}
macro_rules! collect_migration_table {
($ctx:expr, $include_tables:expr, $tables:expr, $($table:ident),+ $(,)?) => {
$(
if should_include_table($include_tables, stringify!($table)) {
let rows = $ctx
.db
.$table()
.iter()
.map(|row| row_to_json(&row))
.collect::<Result<Vec<_>, _>>()?;
$tables.push(MigrationTable {
name: stringify!($table).to_string(),
rows,
});
}
)+
};
}
macro_rules! clear_all_migration_tables {
($ctx:expr, $include_tables:expr) => {
migration_tables!(clear_migration_table, $ctx, $include_tables);
};
}
macro_rules! clear_migration_table {
($ctx:expr, $include_tables:expr, $($table:ident),+ $(,)?) => {
$(
if should_include_table($include_tables, stringify!($table)) {
for row in $ctx.db.$table().iter().collect::<Vec<_>>() {
$ctx.db.$table().delete(row);
}
}
)+
};
}
// Migration authorization lives in its own private table, so private-table export is
// never open to arbitrary logged-in identities.
#[spacetimedb::procedure]
pub fn authorize_database_migration_operator(
ctx: &mut ProcedureContext,
input: DatabaseMigrationAuthorizeOperatorInput,
) -> DatabaseMigrationOperatorProcedureResult {
match authorize_database_migration_operator_inner(ctx, input) {
Ok(operator_identity_hex) => DatabaseMigrationOperatorProcedureResult {
ok: true,
operator_identity_hex: Some(operator_identity_hex),
error_message: None,
},
Err(error) => DatabaseMigrationOperatorProcedureResult {
ok: false,
operator_identity_hex: None,
error_message: Some(error),
},
}
}
#[spacetimedb::procedure]
pub fn revoke_database_migration_operator(
ctx: &mut ProcedureContext,
input: DatabaseMigrationRevokeOperatorInput,
) -> DatabaseMigrationOperatorProcedureResult {
match revoke_database_migration_operator_inner(ctx, input) {
Ok(operator_identity_hex) => DatabaseMigrationOperatorProcedureResult {
ok: true,
operator_identity_hex: Some(operator_identity_hex),
error_message: None,
},
Err(error) => DatabaseMigrationOperatorProcedureResult {
ok: false,
operator_identity_hex: None,
error_message: Some(error),
},
}
}
// Export runs as a procedure that returns the JSON string: reducers have no return
// value and cannot hand private-table contents to external callers.
#[spacetimedb::procedure]
pub fn export_database_migration_to_file(
ctx: &mut ProcedureContext,
input: DatabaseMigrationExportInput,
) -> DatabaseMigrationProcedureResult {
match export_database_migration_to_file_inner(ctx, input) {
Ok((migration_json, stats)) => DatabaseMigrationProcedureResult {
ok: true,
schema_version: MIGRATION_SCHEMA_VERSION,
migration_json: Some(migration_json),
table_stats: stats,
error_message: None,
},
Err(error) => DatabaseMigrationProcedureResult {
ok: false,
schema_version: MIGRATION_SCHEMA_VERSION,
migration_json: None,
table_stats: Vec::new(),
error_message: Some(error),
},
}
}
// The Node script reads the local file and passes the JSON string in; this procedure
// only validates the payload and writes the tables inside a transaction.
#[spacetimedb::procedure]
pub fn import_database_migration_from_file(
ctx: &mut ProcedureContext,
input: DatabaseMigrationImportInput,
) -> DatabaseMigrationProcedureResult {
match import_database_migration_from_file_inner(ctx, input) {
Ok(stats) => DatabaseMigrationProcedureResult {
ok: true,
schema_version: MIGRATION_SCHEMA_VERSION,
migration_json: None,
table_stats: stats,
error_message: None,
},
Err(error) => DatabaseMigrationProcedureResult {
ok: false,
schema_version: MIGRATION_SCHEMA_VERSION,
migration_json: None,
table_stats: Vec::new(),
error_message: Some(error),
},
}
}
fn export_database_migration_to_file_inner(
ctx: &mut ProcedureContext,
input: DatabaseMigrationExportInput,
) -> Result<(String, Vec<DatabaseMigrationTableStat>), String> {
let caller = ctx.sender();
let included_tables = normalize_include_tables(&input.include_tables)?;
let exported_at_micros = ctx.timestamp.to_micros_since_unix_epoch();
let migration_file = ctx.try_with_tx(|tx| {
require_migration_operator(tx, caller)?;
build_migration_file(tx, exported_at_micros, included_tables.as_ref())
})?;
let stats = build_export_stats(&migration_file.tables);
let content = serde_json::to_string_pretty(&migration_file)
.map_err(|error| format!("迁移文件序列化失败: {error}"))?;
Ok((content, stats))
}
fn import_database_migration_from_file_inner(
ctx: &mut ProcedureContext,
input: DatabaseMigrationImportInput,
) -> Result<Vec<DatabaseMigrationTableStat>, String> {
let caller = ctx.sender();
let included_tables = normalize_include_tables(&input.include_tables)?;
if input.migration_json.trim().is_empty() {
return Err("migration_json 不能为空".to_string());
}
ctx.try_with_tx(|tx| require_migration_operator(tx, caller))?;
let migration_file = serde_json::from_str::<MigrationFile>(&input.migration_json)
.map_err(|error| format!("迁移文件 JSON 解析失败: {error}"))?;
if migration_file.schema_version != MIGRATION_SCHEMA_VERSION {
return Err(format!(
"迁移文件 schema_version 不匹配,期望 {},实际 {}",
MIGRATION_SCHEMA_VERSION, migration_file.schema_version
));
}
let stats = if input.dry_run {
build_import_dry_run_stats(&migration_file.tables, included_tables.as_ref())?
} else {
ctx.try_with_tx(|tx| {
require_migration_operator(tx, caller)?;
apply_migration_file(
tx,
&migration_file,
included_tables.as_ref(),
input.replace_existing,
)
})?
};
Ok(stats)
}
fn authorize_database_migration_operator_inner(
ctx: &mut ProcedureContext,
input: DatabaseMigrationAuthorizeOperatorInput,
) -> Result<String, String> {
let caller = ctx.sender();
let operator_identity = parse_migration_operator_identity(&input.operator_identity_hex)?;
let note = normalize_migration_operator_note(&input.note)?;
let bootstrap_secret = input.bootstrap_secret.trim().to_string();
ctx.try_with_tx(|tx| {
authorize_database_migration_operator_tx(
tx,
caller,
operator_identity,
&bootstrap_secret,
note.clone(),
)
})?;
Ok(operator_identity.to_hex().to_string())
}
fn revoke_database_migration_operator_inner(
ctx: &mut ProcedureContext,
input: DatabaseMigrationRevokeOperatorInput,
) -> Result<String, String> {
let caller = ctx.sender();
let operator_identity = parse_migration_operator_identity(&input.operator_identity_hex)?;
ctx.try_with_tx(|tx| {
require_migration_operator(tx, caller)?;
if tx
.db
.database_migration_operator()
.operator_identity()
.find(&operator_identity)
.is_none()
{
return Err("迁移操作员不存在".to_string());
}
tx.db
.database_migration_operator()
.operator_identity()
.delete(&operator_identity);
Ok(())
})?;
Ok(operator_identity.to_hex().to_string())
}
fn authorize_database_migration_operator_tx(
ctx: &ReducerContext,
caller: Identity,
operator_identity: Identity,
bootstrap_secret: &str,
note: String,
) -> Result<(), String> {
let has_operator = ctx.db.database_migration_operator().iter().next().is_some();
if has_operator {
require_migration_operator(ctx, caller)?;
} else {
require_migration_bootstrap_secret(bootstrap_secret)?;
}
if ctx
.db
.database_migration_operator()
.operator_identity()
.find(&operator_identity)
.is_some()
{
ctx.db
.database_migration_operator()
.operator_identity()
.delete(&operator_identity);
}
ctx.db
.database_migration_operator()
.insert(DatabaseMigrationOperator {
operator_identity,
created_at: ctx.timestamp,
created_by: caller,
note,
});
Ok(())
}
fn require_migration_operator(ctx: &ReducerContext, caller: Identity) -> Result<(), String> {
if ctx
.db
.database_migration_operator()
.operator_identity()
.find(&caller)
.is_some()
{
Ok(())
} else {
Err("当前 identity 未被授权执行数据库迁移".to_string())
}
}
fn require_migration_bootstrap_secret(input: &str) -> Result<(), String> {
let configured_secret = MIGRATION_BOOTSTRAP_SECRET
.map(str::trim)
.filter(|secret| !secret.is_empty())
.ok_or_else(|| "迁移引导密钥未配置,无法创建首个操作员".to_string())?;
if configured_secret.chars().count() < MIGRATION_MIN_BOOTSTRAP_SECRET_LEN {
        return Err(format!(
            "迁移引导密钥长度不足,至少需要 {} 个字符",
            MIGRATION_MIN_BOOTSTRAP_SECRET_LEN
        ));
}
if input != configured_secret {
return Err("迁移引导密钥不正确".to_string());
}
Ok(())
}
fn parse_migration_operator_identity(input: &str) -> Result<Identity, String> {
    let trimmed = input.trim();
    // strip_prefix removes at most one "0x", unlike trim_start_matches, which would
    // silently accept malformed inputs such as "0x0x…".
    let identity_hex = trimmed.strip_prefix("0x").unwrap_or(trimmed);
    if identity_hex.len() != 64 {
        return Err("operator_identity_hex 必须是 64 位十六进制 identity".to_string());
    }
    Identity::from_hex(identity_hex)
        .map_err(|error| format!("operator_identity_hex 格式不合法: {error}"))
}
fn normalize_migration_operator_note(input: &str) -> Result<String, String> {
let note = input.trim();
if note.chars().count() > MIGRATION_MAX_OPERATOR_NOTE_CHARS {
return Err(format!(
"迁移操作员备注过长,最多 {} 个字符",
MIGRATION_MAX_OPERATOR_NOTE_CHARS
));
}
Ok(note.to_string())
}
fn normalize_include_tables(input: &[String]) -> Result<Option<HashSet<String>>, String> {
if input.is_empty() {
return Ok(None);
}
let mut tables = HashSet::new();
for raw_name in input {
let name = raw_name.trim();
if name.is_empty() {
continue;
}
if name.len() > MIGRATION_MAX_TABLE_NAME_LEN {
return Err(format!("迁移表名过长: {name}"));
}
if !is_supported_migration_table(name) {
return Err(format!("迁移表不在白名单内: {name}"));
}
tables.insert(name.to_string());
}
Ok(Some(tables))
}
fn should_include_table(include_tables: Option<&HashSet<String>>, table_name: &str) -> bool {
include_tables
.map(|tables| tables.contains(table_name))
.unwrap_or(true)
}
fn build_migration_file(
ctx: &ReducerContext,
exported_at_micros: i64,
include_tables: Option<&HashSet<String>>,
) -> Result<MigrationFile, String> {
let mut tables = Vec::new();
collect_all_migration_tables!(ctx, include_tables, tables);
Ok(MigrationFile {
schema_version: MIGRATION_SCHEMA_VERSION,
exported_at_micros,
tables,
})
}
fn build_export_stats(tables: &[MigrationTable]) -> Vec<DatabaseMigrationTableStat> {
tables
.iter()
.map(|table| DatabaseMigrationTableStat {
table_name: table.name.clone(),
exported_row_count: table.rows.len() as u64,
imported_row_count: 0,
skipped_row_count: 0,
})
.collect()
}
fn build_import_dry_run_stats(
tables: &[MigrationTable],
include_tables: Option<&HashSet<String>>,
) -> Result<Vec<DatabaseMigrationTableStat>, String> {
let mut stats = Vec::new();
for table in tables {
if !is_supported_migration_table(&table.name) {
return Err(format!("迁移文件包含不支持的表: {}", table.name));
}
if should_include_table(include_tables, &table.name) {
stats.push(DatabaseMigrationTableStat {
table_name: table.name.clone(),
exported_row_count: 0,
imported_row_count: table.rows.len() as u64,
skipped_row_count: 0,
});
} else {
stats.push(DatabaseMigrationTableStat {
table_name: table.name.clone(),
exported_row_count: 0,
imported_row_count: 0,
skipped_row_count: table.rows.len() as u64,
});
}
}
Ok(stats)
}
fn apply_migration_file(
ctx: &ReducerContext,
migration_file: &MigrationFile,
include_tables: Option<&HashSet<String>>,
replace_existing: bool,
) -> Result<Vec<DatabaseMigrationTableStat>, String> {
let mut stats = Vec::new();
for table in &migration_file.tables {
if !is_supported_migration_table(&table.name) {
return Err(format!("迁移文件包含不支持的表: {}", table.name));
}
}
if replace_existing {
clear_all_migration_tables!(ctx, include_tables);
}
for table in &migration_file.tables {
if !should_include_table(include_tables, &table.name) {
stats.push(DatabaseMigrationTableStat {
table_name: table.name.clone(),
exported_row_count: 0,
imported_row_count: 0,
skipped_row_count: table.rows.len() as u64,
});
continue;
}
let imported_row_count = insert_migration_table_rows(ctx, table)?;
stats.push(DatabaseMigrationTableStat {
table_name: table.name.clone(),
exported_row_count: 0,
imported_row_count,
skipped_row_count: 0,
});
}
Ok(stats)
}
fn row_to_json<T: spacetimedb::Serialize>(row: &T) -> Result<serde_json::Value, String> {
serde_json::to_value(SerializeWrapper::from_ref(row))
.map_err(|error| format!("迁移行序列化失败: {error}"))
}
fn row_from_json<T>(value: &serde_json::Value) -> Result<T, String>
where
T: for<'de> spacetimedb::Deserialize<'de>,
{
let wrapped: DeserializeWrapper<T> = serde_json::from_value(value.clone())
.map_err(|error| format!("迁移行反序列化失败: {error}"))?;
Ok(wrapped.0)
}
fn insert_migration_table_rows(
ctx: &ReducerContext,
table: &MigrationTable,
) -> Result<u64, String> {
macro_rules! insert_table_match_arm {
($($table:ident),+ $(,)?) => {
match table.name.as_str() {
$(
stringify!($table) => {
let mut imported = 0u64;
for value in &table.rows {
let row = row_from_json(value)
.map_err(|error| format!("{}: {error}", stringify!($table)))?;
ctx.db
.$table()
.try_insert(row)
.map_err(|error| format!("{} 导入失败: {error}", stringify!($table)))?;
imported = imported.saturating_add(1);
}
Ok(imported)
}
)+
_ => Err(format!("迁移表不在白名单内: {}", table.name)),
}
};
}
migration_tables!(insert_table_match_arm)
}
fn is_supported_migration_table(table_name: &str) -> bool {
macro_rules! supported_table_match {
($($table:ident),+ $(,)?) => {
matches!(
table_name,
$(stringify!($table))|+
)
};
}
migration_tables!(supported_table_match)
}