ERP Java microservices code, 2026-04-06

This commit is contained in:
gitadmin 2026-04-06 21:16:35 +08:00
commit d4cf2846de
1524 changed files with 150408 additions and 0 deletions

DEVELOPMENT.md Normal file

@ -0,0 +1,202 @@
# ERP Java Backend Development Guide
## Environment Setup
### 1. Development environment requirements
- JDK 17+
- Maven 3.8+
- Docker 20.10+
- IDE: IntelliJ IDEA / VS Code
### 2. Start the infrastructure
```bash
# Enter the project root directory
cd /root/.openclaw/workspace/erp-java-backend
# Start all infrastructure services
docker-compose up -d
# Check service status
docker-compose ps
# Consoles
# Nacos console: http://localhost:8848/nacos (user: nacos, password: nacos)
# RocketMQ console: http://localhost:8080
# SkyWalking UI: http://localhost:8081
# MinIO console: http://localhost:9001 (user: minioadmin, password: minioadmin123)
```
### 3. Initialize the database
```bash
# The database is initialized automatically by docker-compose
# Connection info:
# - Host: localhost:3307
# - Database: erp_java
# - Username: erp_user
# - Password: erp123456
```
## Project Structure
### Modules
```
erp-java-backend/
├── common/                    # Shared modules
│   ├── common-core/           # Core utilities, constants, exceptions
│   ├── common-web/            # Web response wrappers, exception handling
│   ├── common-mybatis/        # MyBatis configuration
│   └── common-redis/          # Redis configuration
├── services/                  # Business services
│   ├── api-gateway/           # API gateway (to be implemented)
│   ├── user-service/          # User service (created)
│   ├── product-service/       # Product service (to be implemented)
│   ├── order-service/         # Order service (to be implemented)
│   ├── inventory-service/     # Inventory service (to be implemented)
│   ├── finance-service/       # Finance service (to be implemented)
│   ├── admin-service/         # Admin service (to be implemented)
│   ├── file-service/          # File service (to be implemented)
│   └── notification-service/  # Notification service (to be implemented)
└── infrastructure/            # Infrastructure configuration
```
## Development Workflow
### 1. Creating a new service
```bash
# 1. Create the new service directory under services/
mkdir -p services/new-service/src/main/java/com/erp/newservice
# 2. Copy user-service's pom.xml and change the artifactId
# 3. Create the Spring Boot main class
# 4. Configure application.yml
# 5. Add the module to the parent pom.xml
```
### 2. Code conventions
- Use Lombok to reduce boilerplate
- Use MapStruct for object mapping
- Use MyBatis Plus for data access
- Follow RESTful API design conventions
- Generate API docs with Swagger/OpenAPI
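The object-mapping convention above can be illustrated with a minimal sketch. `UserEntity`, `UserDto`, and `UserConverter` are hypothetical names; in practice MapStruct would generate an equivalent implementation at build time from a `@Mapper`-annotated interface — this is the hand-written equivalent of what that generated code does.

```java
// Hypothetical entity/DTO pair; MapStruct would generate this mapping
// from a @Mapper interface, shown here as a hand-written equivalent.
class UserEntity {
    Long id;
    String username;
    String passwordHash; // internal field, deliberately not exposed

    UserEntity(Long id, String username, String passwordHash) {
        this.id = id;
        this.username = username;
        this.passwordHash = passwordHash;
    }
}

class UserDto {
    Long id;
    String username;
}

class UserConverter {
    // Copies only the fields the API should expose.
    static UserDto toDto(UserEntity e) {
        UserDto dto = new UserDto();
        dto.id = e.id;
        dto.username = e.username; // passwordHash is dropped
        return dto;
    }
}
```

The payoff of centralizing this mapping (generated or hand-written) is that sensitive fields like `passwordHash` are stripped in exactly one place.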
### 3. Database development
```java
// Example entity class
@Data
@TableName("table_name")
public class Entity {
    @TableId(type = IdType.AUTO)
    private Long id;
    @TableField(fill = FieldFill.INSERT)
    private LocalDateTime createdAt;
}

// Repository interface
@Mapper
public interface EntityRepository extends BaseMapper<Entity> {
    // Custom query methods
}

// Service implementation
@Service
public class EntityServiceImpl extends ServiceImpl<EntityRepository, Entity> {
    // Business logic
}
```
## Starting Services
### 1. Build the project
```bash
# Run from the project root
mvn clean compile
```
### 2. Start the user service
```bash
# Enter the user service directory
cd services/user-service
# Start the service
mvn spring-boot:run
# Or run the packaged jar (requires `mvn clean package` first)
java -jar target/user-service-1.0.0-SNAPSHOT.jar
```
### 3. Verify the service
```bash
# Health check
curl http://localhost:8082/user/api/users/health
# List users
curl http://localhost:8082/user/api/users
# Create a user
curl -X POST http://localhost:8082/user/api/users \
  -H "Content-Type: application/json" \
  -d '{"username":"test","email":"test@example.com","passwordHash":"password123"}'
```
## Deployment
### 1. Build Docker images
```bash
# Build all services
mvn clean package -DskipTests
docker build -t erp-user-service:latest services/user-service/
```
### 2. Production deployment
```yaml
# Kubernetes deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: erp-user-service:latest
          ports:
            - containerPort: 8082
```
## Monitoring & Debugging
### 1. Viewing logs
```bash
# Tail service logs
tail -f logs/user-service.log
# Tail Docker container logs
docker-compose logs -f user-service
```
### 2. Performance monitoring
- SkyWalking: http://localhost:8081 (distributed tracing)
- Prometheus: http://localhost:9090 (metrics)
- Grafana: http://localhost:3000 (dashboards)
### 3. Debugging tools
- Arthas: Java diagnostics
- JConsole: JVM monitoring
- VisualVM: profiling
## Notes
1. **Service discovery**: all services must register with Nacos
2. **Configuration management**: manage configuration via Nacos Config
3. **Distributed transactions**: handle cross-service transactions with Seata
4. **Message queue**: use RocketMQ for asynchronous communication
5. **Tracing**: integrate SkyWalking for end-to-end tracing

MIGRATION_PLAN.md Normal file

@ -0,0 +1,203 @@
# PHP → Java Migration Plan
## Strategy: develop in parallel, replace incrementally
### Principles
1. **No business interruption** - the PHP system keeps running
2. **Two-way data sync** - PHP and Java data stay consistent
3. **Interface compatibility** - API compatibility is preserved
4. **Incremental migration** - modules are migrated one at a time
---
## Phase Plan
### Phase 1: infrastructure setup (weeks 1-2)
| Task | Status | Owner |
|------|--------|-------|
| ✅ Java project structure | Done | 丫头 |
| ✅ Dockerized infrastructure | Done | 丫头 |
| ✅ User service skeleton | Done | 丫头 |
| 🔄 Database sync design | In progress | - |
| 🔄 API gateway setup | Not started | - |
### Phase 2: core service migration (weeks 3-6)
| Service | Priority | Complexity | Estimate |
|---------|----------|------------|----------|
| User service | P0 | ⭐⭐ | 1 week |
| Product service | P0 | ⭐⭐ | 1 week |
| Order service | P0 | ⭐⭐⭐⭐ | 2 weeks |
| Inventory service | P0 | ⭐⭐⭐ | 1 week |
| Finance service | P1 | ⭐⭐⭐⭐ | 2 weeks |
### Phase 3: business flow validation (weeks 7-8)
| Flow | Validation point | Status |
|------|------------------|--------|
| Registration & login | Authentication flow | Pending |
| Product management | CRUD operations | Pending |
| Order creation | Distributed transactions | Pending |
| Inventory deduction | Data consistency | Pending |
### Phase 4: full cutover (weeks 9-12)
| Task | Description | Risk |
|------|-------------|------|
| Data migration | Migrate historical data | High |
| Traffic cutover | Shift traffic gradually | Medium |
| Monitoring | Production monitoring | Low |
| Rollback plan | Contingency plan | Medium |
---
## Data Sync Design
### 1. Real-time two-way sync
```
PHP system ←→ message queue ←→ Java system
```
### 2. Synced tables
| Table | Direction | Strategy |
|-------|-----------|----------|
| users | Two-way | Real-time |
| tenants | Two-way | Real-time |
| products | PHP→Java | Batch |
| orders | Two-way | Real-time |
| inventory | Two-way | Real-time |
### 3. Conflict resolution
- **Timestamp wins** - the record with the later update time takes effect
- **Manual intervention** - conflicting records are flagged for review
- **Business rules** - resolved by business priority where rules are defined
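The timestamp-wins rule can be sketched minimally as follows, assuming each side carries an `updatedAt` timestamp (the record type and field names here are hypothetical, not the project's actual sync model):

```java
import java.time.LocalDateTime;

// Hypothetical synced record; only the fields the rule needs.
record SyncedRecord(String payload, LocalDateTime updatedAt) {}

class ConflictResolver {
    // Timestamp wins: the record updated later takes effect.
    // Ties fall back to the PHP side here; per the plan above, a real
    // system would flag ambiguous conflicts for manual review instead.
    static SyncedRecord resolve(SyncedRecord phpSide, SyncedRecord javaSide) {
        return javaSide.updatedAt().isAfter(phpSide.updatedAt()) ? javaSide : phpSide;
    }
}
```

Note this rule assumes reasonably synchronized clocks on both systems; clock skew larger than the typical update interval would silently pick the wrong winner.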
---
## API Compatibility Design
### 1. Endpoint mapping
```
PHP API:  /api/auth/login
Java API: /user/api/auth/login (same path is preserved)
```
### 2. Response format
```json
// PHP response format
{
    "code": 200,
    "message": "success",
    "data": {...}
}
// Java response format stays compatible
{
    "code": 200,
    "message": "success",
    "data": {...},
    "timestamp": "2026-04-04T10:00:00"
}
```
### 3. Error handling
- Error codes stay identical
- Error message format is unified
- Exception handling stays compatible
---
## Deployment Architecture
### Production architecture
```
                  ┌───────────────┐
                  │ Load balancer │
                  └───────┬───────┘
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
│ PHP backend    │ │ Java gateway   │ │ Monitoring /   │
│ cluster        │ │ layer          │ │ alerting layer │
└───────┬────────┘ └───────┬────────┘ └───────┬────────┘
        │                  │                  │
        ▼                  ▼                  ▼
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
│ MySQL          │ │ Java micro-    │ │ Monitoring     │
│ primary/replica│ │ service cluster│ │ system         │
└────────────────┘ └────────────────┘ └────────────────┘
```
---
## Risk Control
### Technical risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Distributed transaction failure | Medium | High | Seata + Saga pattern |
| Data inconsistency | Medium | High | Real-time monitoring + compensation |
| Performance regression | Low | Medium | Performance testing + tuning |
| Cascading service failure | Low | High | Circuit breaking + degradation |
### Business risks
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Business interruption | Low | High | Canary release + rollback |
| Data loss | Low | High | Backups + recovery drills |
| Degraded user experience | Medium | Medium | A/B testing + user feedback |
---
## Success Criteria
### Technical metrics
- ✅ Service availability ≥ 99.9%
- ✅ API response time ≤ 200ms
- ✅ Data consistency ≥ 99.99%
- ✅ System throughput ≥ 1000 TPS
### Business metrics
- ✅ Migration invisible to users
- ✅ Full feature parity
- ✅ Performance improvement ≥ 20%
- ✅ Operations cost reduction ≥ 30%
---
## Team Collaboration
### Roles
| Role | Responsibilities | Assignee |
|------|------------------|----------|
| Architect | Architecture design, technology selection | 丫头 |
| Backend developer | Java service development | TBD |
| Frontend developer | Frontend adaptation | TBD |
| DevOps | Deployment and operations | TBD |
| QA | Quality assurance | TBD |
### Communication
- Daily 15-minute stand-up
- Weekly iteration review
- Monthly architecture review
- Ad-hoc issue discussion
---
## Next Actions
### Immediate (this week)
1. [x] Create the Java project skeleton
2. [x] Set up the development environment
3. [ ] Complete the user service
4. [ ] Design the data sync scheme
5. [ ] Draft a detailed development plan
### Next week
1. [ ] Finish the product service
2. [ ] Implement PHP-Java data sync
3. [ ] Set up the API gateway
4. [ ] Run performance baseline tests
---
**Last updated: 2026-04-04**
**Version: v1.0**

README.md Normal file

@ -0,0 +1,37 @@
# ERP Java Microservices Backend
## Architecture Overview
A microservices architecture based on Spring Cloud Alibaba, incrementally replacing the existing PHP system.
## Tech Stack
- **Java 17** + **Spring Boot 3.x**
- **Spring Cloud 2023.x** + **Spring Cloud Alibaba 2023.x**
- **MySQL 8.0** + **Redis 7.x**
- **Nacos 2.x** (service registry and configuration center)
- **Spring Cloud Gateway** (API gateway)
- **Seata** (distributed transactions)
- **RocketMQ** (message queue)
- **MyBatis Plus** (data access layer)
## Service Breakdown
### Core business services
1. `user-service` - users
2. `product-service` - products
3. `order-service` - orders
4. `inventory-service` - inventory
5. `finance-service` - finance
### Supporting services
6. `admin-service` - administration (tenant and plan management)
7. `file-service` - files
8. `notification-service` - notifications
### Infrastructure
9. `api-gateway` - API gateway
10. `auth-service` - authentication
## Development Principles
1. **Incremental migration** - new features in Java; old features migrated gradually
2. **Data sync** - two-way sync between the PHP and Java systems
3. **Interface compatibility** - API compatibility is preserved
4. **Canary releases** - new services replace old ones gradually

STANDARDS_FIX_REPORT.md Normal file

@ -0,0 +1,235 @@
# ERP Microservices Standards Check & Fix Report
**Executed:** 2026-04-05
**Branch:** backup-before-standards-check (backed up)
**Working directory:** /root/.openclaw/workspace/erp-java-backend
---
## 1. Fixed Issues
### 1.1 Skeleton code - AuditLogServiceImpl.java ✅
**File:** `services/approval-flow-service/src/main/java/com/erp/approval/service/impl/AuditLogServiceImpl.java`
**Fixes:**
| Method | Original problem | Fix |
|--------|------------------|-----|
| `getCurrentTenantId()` (L177-179) | Returned hard-coded `0L`, no context awareness | Reads from `TenantContext` via reflection first; on failure logs a `log.warn()` and falls back to the default |
| `getCurrentUserId()` (L182-184) | Returned hard-coded `0L`, no context awareness | Reads from `UserContext` via reflection first; on failure logs a `log.warn()` and falls back to the default |
| `getCurrentUserName()` (L187-189) | Returned hard-coded `"System"`, no context awareness | Reads from `UserContext` via reflection first; on failure logs a `log.warn()` and falls back to the default |
| `getUserAgent()` (L211-220) | Returned `null` when no request context exists | Returns the string `"unknown"` instead, avoiding NPEs |
**Key points:**
- Reflection (`Class.forName`) avoids a direct cross-service dependency, keeping the microservices decoupled
- Reflection failures are logged at `trace` level only (the expected case)
- Falling back to a default value is logged at `warn` level (needs human attention)
### 1.2 骨架代码 - ReportExportServiceImpl.java ✅
**文件:** `services/report-service/src/main/java/com/erp/report/export/service/ReportExportServiceImpl.java`
**修复内容:**
| 方法 | 原问题 | 修复方案 |
|------|--------|---------|
| `queryGoodsData()` (L572) | 静默返回空列表`[]`,无任何提示 | 添加`log.warn()`说明需接入商品服务Feign客户端增加TODO实现步骤注释 |
| `querySuppliersData()` (L636) | 静默返回空列表`[]`,无任何提示 | 添加`log.warn()`说明需接入供应商服务Feign客户端增加TODO实现步骤注释 |
**修复要点:**
- 这些是真实的stub实现不是防御性代码导出功能会无声失败
- 修复后至少会在日志中看到警告,便于发现数据缺失问题
- 需要后续接入对应的Feign客户端才能真正工作
---
### 1.3 总控与租户 - 网关Micrometer计数器 ✅
**新增文件:** `gateway/src/main/java/com/erp/gateway/filter/MetricsFilter.java`
**修复内容:**
- 新增`MetricsFilter`全局过滤器使用Micrometer记录以下指标
- `api.requests.total` - 按方法统计的总请求数
- `api.requests.status` - 按HTTP状态码统计
- `api.requests.service` - 按目标服务统计
- `api.requests.path` - 按路径统计防止高基数路径截断到50字符
- `api.latency` - 全局响应延迟p50/p95/p99百分位
- `api.latency.path` - 按路径统计的响应延迟
**配置文件修改:**
- `gateway/pom.xml` - 添加`micrometer-registry-prometheus`依赖
- `gateway/src/main/resources/application.yml` - 暴露`/actuator/prometheus`和`/actuator/metrics`端点
---
## 二、已检查确认正常的功能
### 2.1 物流自建功能 ✅
| 检查项 | 状态 | 说明 |
|--------|------|------|
| 快递适配器(顺丰/圆通/韵达/中通) | ✅ 已实现 | adapter目录下各物流商适配器 |
| 轨迹表分区按月RANGE | ✅ 脚本已就绪 | `V2__add_partition_and_indexes.sql`存在,需手动执行切换 |
| 异常检测定时任务 | ✅ 已实现 | `LogisticsExceptionDetectJob`含4类检测异常状态/同步失败/停滞/派送超时) |
| 告警去重 | ✅ 已实现 | Redis 24小时去重 |
| 幂等性 | ✅ 已实现 | `LogisticsIdempotencyService` |
**注意:** `V2__add_partition_and_indexes.sql`中的分区表切换RENAME TABLE需要手动在低峰期执行。
---
### 2.2 Nacos动态配置下发租户功能列表 ✅
| 检查项 | 状态 | 说明 |
|--------|------|------|
| 租户配置推送Nacos | ✅ 已实现 | `NacosConfigService.publishTenantPackageConfig()` |
| 功能开关推送Nacos | ✅ 已实现 | `NacosConfigService.publishFeatureFlag()` |
| 审核通过推送配置 | ✅ 已实现 | `approveTenant()`方法中调用`pushPackageChangeToNacos()` |
---
### 2.3 订单状态机 ✅
`OrderStateMachine.java` 已完整实现:
- 5个状态常量pending/auditing/shipped/completed/cancelled
- 状态转换验证`canTransition()`
- 状态执行`transition()`
- 辅助判断方法canAudit/canShip/canComplete/canCancel
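The state machine described above can be sketched minimally as follows. This is a simplified reconstruction, not the actual `OrderStateMachine.java` source; the transition table shown is an assumption based on the five listed states.

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of a state machine like the one described above.
// The exact transition rules are an assumption, not the real source.
class OrderStateMachineSketch {
    static final Map<String, Set<String>> TRANSITIONS = Map.of(
        "pending",   Set.of("auditing", "cancelled"),
        "auditing",  Set.of("shipped", "cancelled"),
        "shipped",   Set.of("completed"),
        "completed", Set.of(),   // terminal state
        "cancelled", Set.of()    // terminal state
    );

    static boolean canTransition(String from, String to) {
        return TRANSITIONS.getOrDefault(from, Set.of()).contains(to);
    }

    static String transition(String from, String to) {
        if (!canTransition(from, to)) {
            throw new IllegalStateException("Illegal transition: " + from + " -> " + to);
        }
        return to;
    }
}
```

Centralizing transitions in a table like this is what makes validation (`canTransition`) and execution (`transition`) trivially consistent with each other.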
---
## 三、待人工复核项(无法自动修复)
### 3.1 消息消费者Stub实现 ⚠️
**OrderMessageConsumer**L80-103
```
handleOrderCreated() - TODO: 调用订单服务处理订单创建
handleOrderUpdated() - TODO: 调用订单服务处理订单更新
handleOrderCancelled() - TODO: 调用订单服务处理订单取消(恢复库存/退款)
handleOrderPaid() - TODO: 调用订单服务处理订单支付
handleOrderShipped() - TODO: 调用物流服务
handleOrderDelivered() - TODO: 确认收货,完成订单流程
```
**FinanceMessageConsumer**L67-96
```
handleInvoiceCreated() - TODO: 调用财务服务处理开票
handleRefundProcessed() - TODO: 调用财务服务处理退款
handleReconciliation() - TODO: 调用财务服务进行对账
handleSettlement() - TODO: 调用财务服务进行结算
```
**InventoryMessageConsumer**L68-89
```
handleInventoryDeduct() - TODO: 调用库存服务扣减库存
handleInventoryRestore() - TODO: 调用库存服务恢复库存
handleInventoryLock() - TODO: 调用库存服务锁定库存
handleInventoryUnlock() - TODO: 调用库存服务解锁库存
handleInventoryAlert() - TODO: 发送库存预警通知
```
**影响:** 这些是RocketMQ消息的最终处理入口当前实现只记录日志消息会被消费但不执行任何业务操作。
---
### 3.2 消息消费者缺少业务级幂等键 ⚠️
**问题:** `BaseRocketMQConsumer`使用`messageExt.getMsgId()`RocketMQ生成的broker消息ID作为消息标识但这不是业务级幂等键。
**建议修复方案:** 在`OrderMessage`/`FinanceMessage`/`InventoryMessage`中提取业务ID如orderId + eventType作为Redis去重键而非使用RocketMQ的msgId。
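A minimal sketch of the suggested fix, assuming the message exposes its business ID and event type (the class and key format below are hypothetical; a real consumer would back the guard with Redis `setIfAbsent` and a TTL rather than an in-memory set):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical message carrying a business ID and event type.
record OrderMessage(long orderId, String eventType) {}

class IdempotencyGuard {
    // Stand-in for Redis SETNX + TTL; a real implementation would use
    // StringRedisTemplate.opsForValue().setIfAbsent(key, "1", ttl).
    private final Set<String> seen = new HashSet<>();

    // Business-level key: stable across broker redeliveries,
    // unlike the broker-generated msgId.
    static String dedupKey(OrderMessage msg) {
        return "dedup:order:" + msg.orderId() + ":" + msg.eventType();
    }

    // Returns true only the first time a given business event is seen.
    boolean tryAcquire(OrderMessage msg) {
        return seen.add(dedupKey(msg));
    }
}
```

The point of the business key is that a redelivered message gets a new broker msgId but the same `orderId + eventType`, so only the business key actually deduplicates retries.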
---
### 3.3 Platform sync service stubs ⚠️
**File:** `services/platform-sync-service/.../PlatformSyncServiceImpl.java`
```
syncOrderToPlatform()    - TODO: implement per-platform API adapters and call the real platform APIs
pullOrdersFromPlatform() - TODO: implement per-platform API adapters and pull real order data
```
---
### 3.4 Reconciliation service stubs ⚠️
**File:** `services/reconciliation-service/.../ReconciliationServiceImpl.java`
```
getStatistics()            - TODO: implement full statistics logic (L564)
getDifferenceTrend()       - TODO: implement difference-trend statistics (L612)
getWeeklyReconciliation()  - TODO: implement weekly reconciliation (L640)
getMonthlyReconciliation() - TODO: implement monthly reconciliation (L646)
fetchPlatformOrders()      - TODO: implement real platform order fetching (L664)
fetchInternalOrders()      - TODO: implement real internal order fetching (L674)
```
---
### 3.5 Notification service stubs ⚠️
**File:** `services/notification-service/.../NotificationServiceImpl.java`
```
sendEmail() - TODO: implement real email delivery (SendGrid / Alibaba Cloud mail) (L106)
sendSms()   - TODO: implement real SMS delivery (Alibaba Cloud / Tencent Cloud SMS) (L125)
```
Both are currently mock implementations that only log and do not actually send anything.
---
### 3.6 Transaction boundaries ⚠️ (needs manual confirmation)
**Potential risk:** the `@Transactional` methods in `OrderServiceImpl` were inspected and no Feign calls wrapped in a transaction were found. Still, the following should be confirmed:
- every method involving cross-service calls
- any `FeignClient` call made inside a `@Transactional` method
Recommendation: use `TransactionTemplate` for programmatic transactions and keep Feign calls outside the transaction.
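The recommended boundary can be sketched in plain Java as follows. `TxRunner` stands in for Spring's `TransactionTemplate.execute(...)`, and the flow and names are hypothetical; the point is only the ordering — local writes inside the transaction, the remote (Feign) call after it commits.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Stand-in for Spring's TransactionTemplate: a real implementation
// would begin/commit/rollback around the callback.
class TxRunner {
    static <T> T inTransaction(Supplier<T> work) {
        return work.get();
    }
}

// Hypothetical flow illustrating the boundary.
class OrderShipFlow {
    final List<String> steps = new ArrayList<>();

    void ship(long orderId) {
        // 1. Local DB updates only, inside the transaction.
        Long shipmentId = TxRunner.inTransaction(() -> {
            steps.add("update-order-" + orderId);
            return orderId * 10;
        });
        // 2. Remote call after the transaction has committed, so a slow
        //    or failing remote call cannot hold DB locks or roll back
        //    for reasons unrelated to the local write.
        steps.add("feign-notify-logistics-" + shipmentId);
    }
}
```

If the remote call fails after commit, the flow needs a compensation path (retry, outbox, or Saga step) rather than a rollback — which is exactly why the call does not belong inside the transaction.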
---
## 4. Deployment Standardization Checklist
### 4.1 Immediate (code already fixed)
| # | Action | File |
|---|--------|------|
| 1 | Rebuild approval-flow-service | AuditLogServiceImpl.java |
| 2 | Rebuild report-service | ReportExportServiceImpl.java |
| 3 | Rebuild gateway | MetricsFilter.java, application.yml, pom.xml |
| 4 | Verify logs after deployment | Check for "无法获取当前租户上下文" (cannot resolve current tenant context) warnings |
### 4.2 Manual (requires a maintenance window)
| # | Action | Notes |
|---|--------|-------|
| 1 | Run the tracking-table partition switch | RENAME TABLE step in `V2__add_partition_and_indexes.sql` |
| 2 | Prometheus configuration | Confirm the `/actuator/prometheus` endpoint is reachable |
| 3 | Wire up product/supplier Feign clients | Fixes the ReportExportServiceImpl stubs |
| 4 | Implement the real message consumers | Order/Finance/Inventory MessageConsumer |
| 5 | Integrate email/SMS providers | NotificationServiceImpl |
| 6 | Integrate platform API adapters | PlatformSyncServiceImpl |
---
## 5. File Change Summary
```
gateway/pom.xml                                        | +8 lines
gateway/src/main/resources/application.yml             | +10 lines
.../approval/service/impl/AuditLogServiceImpl.java     | ±47 lines (3 stub methods reworked + getUserAgent fixed)
.../export/service/ReportExportServiceImpl.java        | +14 lines (warn logs added to 2 stub methods)
gateway/src/main/java/com/erp/gateway/filter/MetricsFilter.java | new (+185 lines)
```
---
## 6. Conclusion
✅ **Fixed: 4 items** - skeleton code (stubs), improved logging, gateway metrics collection
✅ **Confirmed working: 3 items** - logistics features, Nacos config push, order state machine
⚠️ **Pending manual review: 6 items** - message consumer stubs, platform sync, reconciliation service, notification service, transaction boundaries, logistics partition switch
**Priority:** the message consumer stubs (they directly affect business flows) and the tracking-table partition switch (performance will degrade as data grows)


@ -0,0 +1,74 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-config</artifactId>
<name>Common Config</name>
<description>Common configuration module - shared OpenAPI, Jackson, and Redis configuration</description>
<dependencies>
<!-- Spring Boot Web (for OpenAPI) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Swagger / OpenAPI 3 -->
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
<version>2.3.0</version>
</dependency>
<!-- Jackson -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
</dependency>
<!-- Redis -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- Common core module -->
<dependency>
<groupId>com.erp</groupId>
<artifactId>common-core</artifactId>
<version>${project.version}</version>
</dependency>
<!-- Lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>


@ -0,0 +1,28 @@
package com.erp.common.config;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
/**
 * Unified Jackson configuration.
 *
 * What it does:
 * - Registers the Java 8 time module (LocalDateTime is serialized as an ISO-8601 string)
 * - Disables writing dates as numeric timestamps
 */
@Configuration
public class JacksonConfig {

    @Bean
    @Primary
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.registerModule(new JavaTimeModule());
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        return mapper;
    }
}
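The effect of the two settings above can be illustrated with plain JDK formatting: with `WRITE_DATES_AS_TIMESTAMPS` disabled, a `LocalDateTime` field is rendered in the ISO-8601 text shape below rather than as a numeric timestamp array (the sample value is arbitrary; `IsoDateDemo` is just an illustration, not part of the module):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Shows the ISO-8601 text shape the Jackson config above produces
// for LocalDateTime fields (instead of a numeric timestamp).
class IsoDateDemo {
    static String render(LocalDateTime t) {
        return t.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    }
}
```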


@ -0,0 +1,71 @@
package com.erp.common.config;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Contact;
import io.swagger.v3.oas.models.info.Info;
import io.swagger.v3.oas.models.info.License;
import io.swagger.v3.oas.models.servers.Server;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.List;
/**
 * Unified OpenAPI 3.0 configuration.
 *
 * Usage: configure in the business service's application.yml:
 * erp:
 *   openapi:
 *     title: xxx-service API
 *     description: xxx service API documentation
 *     service-url: https://api.erpzbbh.cn
 *
 * Swagger UI: /swagger-ui.html
 * Knife4j: /doc.html
 */
@Configuration
public class OpenApiConfig {

    @Value("${erp.openapi.title:ERP Service API}")
    private String title;

    @Value("${erp.openapi.description:ERP Service API Documentation}")
    private String description;

    @Value("${erp.openapi.service-url:https://api.erpzbbh.cn}")
    private String serviceUrl;

    @Value("${erp.openapi.version:1.0.0}")
    private String version;

    @Bean
    public OpenAPI erpOpenAPI() {
        Server devServer = new Server();
        devServer.setUrl("http://localhost:{port}");
        devServer.setDescription("Development Server");

        Server prodServer = new Server();
        prodServer.setUrl(serviceUrl);
        prodServer.setDescription("Production Server");

        Contact contact = new Contact();
        contact.setName("ERP Team");
        contact.setEmail("dev@erpzbbh.cn");

        License license = new License()
                .name("Private License")
                .url("https://erpzbbh.cn/license");

        Info info = new Info()
                .title(title)
                .version(version)
                .description(description)
                .contact(contact)
                .license(license);

        return new OpenAPI()
                .info(info)
                .servers(List.of(devServer, prodServer));
    }
}


@ -0,0 +1,72 @@
package com.erp.common.config;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
/**
 * Unified Redis configuration.
 *
 * Provides two RedisTemplate variants:
 * - redisTemplate: Jackson2JsonRedisSerializer (full JSON; recommended for complex objects)
 * - stringRedisTemplate: StringRedisSerializer (plain strings; recommended for simple key/value use)
 *
 * When to use which:
 * - Caching complex objects -> redisTemplate
 * - Caching simple strings/JSON -> stringRedisTemplate
 * - Type-preserving deserialization -> GenericJackson2JsonRedisSerializer (imported but not wired up here)
 */
@Configuration
public class RedisConfig {

    /**
     * JSON-serializing RedisTemplate (recommended).
     */
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);

        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        objectMapper.activateDefaultTyping(objectMapper.getPolymorphicTypeValidator(), ObjectMapper.DefaultTyping.NON_FINAL);
        objectMapper.registerModule(new JavaTimeModule());
        objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

        Jackson2JsonRedisSerializer<Object> jsonSerializer = new Jackson2JsonRedisSerializer<>(objectMapper, Object.class);
        StringRedisSerializer stringSerializer = new StringRedisSerializer();
        template.setKeySerializer(stringSerializer);
        template.setHashKeySerializer(stringSerializer);
        template.setValueSerializer(jsonSerializer);
        template.setHashValueSerializer(jsonSerializer);
        template.afterPropertiesSet();
        return template;
    }

    /**
     * String RedisTemplate for simple string caches.
     */
    @Bean
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory connectionFactory) {
        StringRedisTemplate template = new StringRedisTemplate();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new StringRedisSerializer());
        template.afterPropertiesSet();
        return template;
    }
}


@ -0,0 +1,56 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-core</artifactId>
<name>Common Core</name>
<description>Common core module - utilities, constants, exceptions</description>
<dependencies>
<!-- Spring Boot -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<!-- Utilities -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!-- JSON handling -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<!-- Validation -->
<dependency>
<groupId>jakarta.validation</groupId>
<artifactId>jakarta.validation-api</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>


@ -0,0 +1,25 @@
package com.erp.common.core.event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Order cancelled/refunded event - published when an order is cancelled or refunded.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderCancelledEvent {
    private Long orderId;
    private String orderNo;
    private String shortId;
    private String platform;
    private Long shopId;
    private BigDecimal refundAmount;
    private String refundReason;
    private String eventType; // cancel / refund
    private LocalDateTime eventTime;
}


@ -0,0 +1,25 @@
package com.erp.common.core.event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Order created event - published when an order is created.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderCreatedEvent {
    private Long orderId;
    private String orderNo;
    private String shortId;
    private String platform;
    private Long shopId;
    private BigDecimal totalAmount;
    private String paymentMethod;
    private LocalDateTime orderTime;
    private String operator;
}


@ -0,0 +1,24 @@
package com.erp.common.core.event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Order paid event - published when an order is paid successfully.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderPaidEvent {
    private Long orderId;
    private String orderNo;
    private String shortId;
    private String platform;
    private Long shopId;
    private BigDecimal paidAmount;
    private String paymentMethod;
    private LocalDateTime paymentTime;
}


@ -0,0 +1,27 @@
package com.erp.common.core.event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Order shipped event - published when an order ships; drives finance bookkeeping.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderShippedEvent {
    private Long orderId;
    private String orderNo;
    private String shortId;
    private String platform;
    private Long shopId;
    private String expressCompany;
    private String expressNo;
    private BigDecimal goodsAmount;
    private BigDecimal freight;
    private BigDecimal totalAmount;
    private LocalDateTime shipTime;
}


@ -0,0 +1,22 @@
package com.erp.common.core.event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
/**
 * Stock change confirmation event - published after stock has actually been
 * moved out of or into the warehouse.
 * Distinct from com.erp.inventory.event.StockChangedEvent: this is the application-level event.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class StockChangedApplicationEvent {
    private String eventType; // outbound / inbound / adjust
    private String skuCode;
    private Long warehouseId;
    private Integer quantity;
    private String relatedNo; // related document no. (order no., purchase order no., etc.)
    private LocalDateTime eventTime;
}


@ -0,0 +1,27 @@
package com.erp.common.core.exception;
import lombok.Getter;
/**
 * Base class for business exceptions.
 */
@Getter
public class BusinessException extends RuntimeException {
    private final Integer code;

    public BusinessException(String message) {
        super(message);
        this.code = 500;
    }

    public BusinessException(Integer code, String message) {
        super(message);
        this.code = code;
    }

    public BusinessException(String message, Throwable cause) {
        super(message, cause);
        this.code = 500;
    }
}


@ -0,0 +1,150 @@
package com.erp.common.core.model;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
/**
 * Unified API response wrapper.
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class ApiResponse<T> {

    /**
     * Response code: 200 OK, 400 bad request, 401 unauthenticated, 403 forbidden, 404 not found, 500 server error
     */
    @Builder.Default
    private Integer code = 200;

    /**
     * Response message
     */
    private String message;

    /**
     * Response payload
     */
    private T data;

    /**
     * Request ID (for log tracing)
     */
    private String requestId;

    /**
     * Response timestamp
     */
    @Builder.Default
    private LocalDateTime timestamp = LocalDateTime.now();

    // ==================== Static factory methods ====================

    /**
     * Success response (no payload)
     */
    public static <T> ApiResponse<T> success() {
        return ApiResponse.<T>builder()
                .code(200)
                .message("操作成功")
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Success response (with payload)
     */
    public static <T> ApiResponse<T> success(T data) {
        return ApiResponse.<T>builder()
                .code(200)
                .message("操作成功")
                .data(data)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Success response (with custom message)
     */
    public static <T> ApiResponse<T> success(String message, T data) {
        return ApiResponse.<T>builder()
                .code(200)
                .message(message)
                .data(data)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Error response
     */
    public static <T> ApiResponse<T> error(String message) {
        return ApiResponse.<T>builder()
                .code(500)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Error response (with error code)
     */
    public static <T> ApiResponse<T> error(Integer code, String message) {
        return ApiResponse.<T>builder()
                .code(code)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Bad request (400)
     */
    public static <T> ApiResponse<T> badRequest(String message) {
        return ApiResponse.<T>builder()
                .code(400)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Unauthenticated (401)
     */
    public static <T> ApiResponse<T> unauthorized(String message) {
        return ApiResponse.<T>builder()
                .code(401)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Forbidden (403)
     */
    public static <T> ApiResponse<T> forbidden(String message) {
        return ApiResponse.<T>builder()
                .code(403)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }

    /**
     * Not found (404)
     */
    public static <T> ApiResponse<T> notFound(String message) {
        return ApiResponse.<T>builder()
                .code(404)
                .message(message)
                .timestamp(LocalDateTime.now())
                .build();
    }
}
}


@ -0,0 +1,100 @@
package com.erp.common.core.model;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
 * Paginated response wrapper.
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class PageResponse<T> {

    /**
     * Records on the current page
     */
    private List<T> list;

    /**
     * Total number of records
     */
    private Long total;

    /**
     * Current page number (1-based)
     */
    private Long page;

    /**
     * Records per page
     */
    private Long pageSize;

    /**
     * Total number of pages
     */
    private Long totalPages;

    /**
     * Whether a next page exists
     */
    private Boolean hasNext;

    /**
     * Whether a previous page exists
     */
    private Boolean hasPrevious;

    /**
     * Offset of the first record on this page
     */
    private Long offset;

    public Boolean getHasNext() {
        if (total == null || page == null || pageSize == null) {
            return null;
        }
        return page * pageSize < total;
    }

    public Boolean getHasPrevious() {
        if (page == null) {
            return null;
        }
        return page > 1;
    }

    public Long getOffset() {
        if (page == null || pageSize == null) {
            return null;
        }
        return (page - 1) * pageSize;
    }

    /**
     * Factory method for page responses
     */
    public static <T> PageResponse<T> of(List<T> list, Long total, Integer page, Integer pageSize) {
        Long totalPages = pageSize != null && pageSize > 0 ? (total + pageSize - 1) / pageSize : 0L;
        Long pageLong = page != null ? page.longValue() : null;
        Long pageSizeLong = pageSize != null ? pageSize.longValue() : null;
        return PageResponse.<T>builder()
                .list(list)
                .total(total)
                .page(pageLong)
                .pageSize(pageSizeLong)
                .totalPages(totalPages)
                .hasNext(pageLong != null && pageSizeLong != null && pageLong * pageSizeLong < total)
                .hasPrevious(pageLong != null && pageLong > 1)
                .offset(pageLong != null && pageSizeLong != null ? (pageLong - 1) * pageSizeLong : 0L)
                .build();
    }
}
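The derived-field logic in `PageResponse` (total pages, hasNext, offset) boils down to the arithmetic below, restated standalone for clarity (`Paging` is an illustrative class, not part of the module):

```java
// Standalone restatement of PageResponse's pagination arithmetic.
class Paging {
    // Ceiling division: number of pages needed for `total` rows.
    static long totalPages(long total, long pageSize) {
        return (total + pageSize - 1) / pageSize;
    }

    // A next page exists if rows consumed so far fall short of total.
    static boolean hasNext(long page, long pageSize, long total) {
        return page * pageSize < total;
    }

    // Rows to skip for a 1-based page number.
    static long offset(long page, long pageSize) {
        return (page - 1) * pageSize;
    }
}
```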


@ -0,0 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-mybatis</artifactId>
<name>Common MyBatis</name>
<description>Common MyBatis utilities and extensions</description>
<dependencies>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>${mybatis-plus.version}</version>
</dependency>
<dependency>
<groupId>com.erp</groupId>
<artifactId>common-core</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
</project>


@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-redis</artifactId>
<name>Common Redis</name>
<description>Common Redis utilities and configurations</description>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>com.erp</groupId>
<artifactId>common-core</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
</project>

common/common-web/pom.xml Normal file

@ -0,0 +1,63 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-web</artifactId>
<name>Common Web</name>
<description>Common web module - response wrappers, exception handling, interceptors, etc.</description>
<dependencies>
<!-- Spring Boot -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- 公共核心模块 -->
<dependency>
<groupId>com.erp</groupId>
<artifactId>common-core</artifactId>
<version>${project.version}</version>
</dependency>
<!-- 工具类 -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!-- 验证 -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<!-- OpenFeign -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@ -0,0 +1,52 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<artifactId>common-web</artifactId>
<name>Common Web</name>
<description>Common web module - response wrappers, exception handling, interceptors, etc.</description>
<dependencies>
<!-- Spring Boot -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- 公共核心模块 -->
<dependency>
<groupId>com.erp</groupId>
<artifactId>common-core</artifactId>
<version>${project.version}</version>
</dependency>
<!-- 工具类 -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!-- 验证 -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<!-- OpenFeign -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
</dependencies>
</project>

View File

@ -0,0 +1,14 @@
package com.erp.common.web.dto;
import lombok.Data;
import java.math.BigDecimal;
/**
 * Create-order request
 */
@Data
public class CreateOrderRequest {
    private Long userId;
    private BigDecimal totalAmount;
    private String remark;
}

View File

@ -0,0 +1,18 @@
package com.erp.common.web.dto;
import lombok.Data;
import java.math.BigDecimal;
import java.time.LocalDateTime;
/**
 * Order data transfer object
 */
@Data
public class OrderDTO {
    private Long id;
    private String orderNo;
    private Long userId;
    private BigDecimal totalAmount;
    private String status;
    private LocalDateTime createTime;
}

View File

@ -0,0 +1,15 @@
package com.erp.common.web.dto;
import lombok.Data;
import java.util.List;
/**
 * Paged result
 */
@Data
public class PagedResult<T> {
    private List<T> records;
    private long total;
    private int page;
    private int size;
}
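The paged result above carries `total` and `size`; the page count a caller derives from them is a ceiling division. A standalone sketch — the `totalPages` helper is illustrative and not part of common-web:

```java
public class PageMath {
    // Ceiling division: pages needed to hold `total` records at `size` per page.
    // Illustrative helper only; common-web does not ship this method.
    static long totalPages(long total, int size) {
        if (size <= 0) throw new IllegalArgumentException("size must be positive");
        return (total + size - 1) / size;
    }

    public static void main(String[] args) {
        System.out.println(totalPages(0, 20));  // 0
        System.out.println(totalPages(40, 20)); // 2
        System.out.println(totalPages(41, 20)); // 3
    }
}
```

Integer ceiling division avoids the float round-trip of `Math.ceil((double) total / size)`.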

View File

@ -0,0 +1,28 @@
package com.erp.common.web.dto;
import lombok.Data;
/**
 * Generic response wrapper
 */
@Data
public class Result<T> {
    private int code;
    private String message;
    private T data;
    public static <T> Result<T> success(T data) {
        Result<T> result = new Result<>();
        result.setCode(200);
        result.setMessage("success");
        result.setData(data);
        return result;
    }
    public static <T> Result<T> fail(String message) {
        Result<T> result = new Result<>();
        result.setCode(500);
        result.setMessage(message);
        return result;
    }
}
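A minimal sketch of how the factory methods above are meant to be used at a call site. The class is restated field-for-field with hand-written accessors so the snippet compiles without the common-web module or Lombok on the classpath:

```java
// Standalone restatement of Result<T> for demonstration only;
// the real class lives in com.erp.common.web.dto.
class Result<T> {
    private int code;
    private String message;
    private T data;

    static <T> Result<T> success(T data) {
        Result<T> r = new Result<>();
        r.code = 200;
        r.message = "success";
        r.data = data;
        return r;
    }

    static <T> Result<T> fail(String message) {
        Result<T> r = new Result<>();
        r.code = 500;
        r.message = message;
        return r;
    }

    int getCode() { return code; }
    String getMessage() { return message; }
    T getData() { return data; }
}

public class ResultDemo {
    public static void main(String[] args) {
        Result<String> ok = Result.success("order-123");
        Result<String> err = Result.fail("insufficient stock");
        System.out.println(ok.getCode() + " " + ok.getMessage() + " " + ok.getData());
        System.out.println(err.getCode() + " " + err.getMessage());
    }
}
```

Callers branch on `code` rather than on exceptions, which is why `fail` leaves `data` null.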

View File

@ -0,0 +1,129 @@
package com.erp.common.web.feign;
import com.erp.common.web.dto.Result;
import lombok.Data;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.*;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.util.List;
/**
 * Feign client for the order service.
 * Example: the user service calling the order service.
 */
@FeignClient(name = "order-service", path = "/api/orders")
public interface OrderServiceClient {
    /**
     * Create an order
     */
    @PostMapping
    Result<OrderDTO> createOrder(@RequestBody CreateOrderRequest request);
    /**
     * List a user's orders
     */
    @GetMapping("/user/{userId}")
    Result<PagedResult<OrderDTO>> getUserOrders(
            @PathVariable("userId") Long userId,
            @RequestParam(value = "page", defaultValue = "1") Integer page,
            @RequestParam(value = "size", defaultValue = "20") Integer size,
            @RequestParam(value = "status", required = false) String status
    );
    /**
     * Get order details
     */
    @GetMapping("/{orderId}")
    Result<OrderDTO> getOrderById(@PathVariable("orderId") Long orderId);
    /**
     * Cancel an order
     */
    @PostMapping("/{orderId}/cancel")
    Result<Void> cancelOrder(@PathVariable("orderId") Long orderId);
    /**
     * Order statistics for a user
     */
    @GetMapping("/user/{userId}/statistics")
    Result<OrderStatisticsDTO> getUserOrderStatistics(@PathVariable("userId") Long userId);
}
/**
 * Create-order request
 */
@Data
class CreateOrderRequest {
    private Long userId;
    private List<OrderItemRequest> items;
    private String shippingAddress;
    private String remark;
}
/**
 * Order item request
 */
@Data
class OrderItemRequest {
    private Long productId;
    private Integer quantity;
    private BigDecimal price;
}
/**
 * Order DTO
 */
@Data
class OrderDTO {
    private Long id;
    private String orderNo;
    private Long userId;
    private BigDecimal totalAmount;
    private String status;
    private LocalDateTime createdAt;
    private List<OrderItemDTO> items;
}
/**
 * Order item DTO
 */
@Data
class OrderItemDTO {
    private Long productId;
    private String productName;
    private Integer quantity;
    private BigDecimal price;
    private BigDecimal subtotal;
}
/**
 * Order statistics DTO
 */
@Data
class OrderStatisticsDTO {
    private Integer totalOrders;
    private BigDecimal totalAmount;
    private Integer pendingOrders;
    private Integer completedOrders;
    private Integer canceledOrders;
}
/**
 * Paged result
 */
@Data
class PagedResult<T> {
    private List<T> records;
    private Long total;
    private Integer size;
    private Integer current;
    private Integer pages;
}
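At runtime Spring generates the implementation of `OrderServiceClient`, so a caller only ever sees the `Result` envelope. The sketch below substitutes a hand-rolled stub for the Feign proxy to show the unwrap-and-check pattern; every name here is illustrative, not part of the module's API:

```java
import java.util.function.LongFunction;

public class OrderClientDemo {
    // Minimal stand-ins for the Feign DTOs; illustrative only.
    record Order(long id, String status) {}
    record Resp<T>(int code, String message, T data) {}

    // Hand-rolled stub standing in for the Feign-generated proxy.
    static final LongFunction<Resp<Order>> client = id ->
            id > 0 ? new Resp<>(200, "success", new Order(id, "PAID"))
                   : new Resp<>(500, "order not found", null);

    // Unwrap the envelope, treating any non-200 code as a remote failure.
    static Order fetchOrThrow(long orderId) {
        Resp<Order> resp = client.apply(orderId);
        if (resp.code() != 200) {
            throw new IllegalStateException("remote call failed: " + resp.message());
        }
        return resp.data();
    }

    public static void main(String[] args) {
        System.out.println(fetchOrThrow(42L).status()); // PAID
    }
}
```

Centralizing the code check in one helper keeps business code from silently reading `data` off a failed response.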

19
deploy/sql/init.sh Normal file
View File

@ -0,0 +1,19 @@
#!/bin/bash
# ERP Java database initialization script
# Usage: bash init.sh
DB_HOST="111.229.80.149"
DB_USER="root"
DB_PASS="nihao588+"
DB_NAME="erp_java"
echo "=== Creating database ==="
mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" -e "DROP DATABASE IF EXISTS $DB_NAME; CREATE DATABASE $DB_NAME DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;" 2>/dev/null
echo "=== Applying schema SQL ==="
SQL_DIR="$(dirname "$0")"
mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" --force "$DB_NAME" < "$SQL_DIR/merge.sql" 2>&1 | grep -v "Warning" | grep -E "^ERROR" || true
echo "=== Schema applied ==="
# -N suppresses the column-header line so the table count is not off by one
TABLE_COUNT=$(mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" -N -e "USE $DB_NAME; SHOW TABLES;" 2>/dev/null | wc -l)
echo "$TABLE_COUNT tables created"

2045
deploy/sql/merge.sql Normal file

File diff suppressed because it is too large Load Diff

683
docker-compose.yml Normal file
View File

@ -0,0 +1,683 @@
version: '3.8'
services:
mysql:
image: mysql:8.0
container_name: erp-mysql
environment:
MYSQL_ROOT_PASSWORD: root123456
MYSQL_DATABASE: erp_java
MYSQL_USER: erp_user
MYSQL_PASSWORD: erp123456
ports:
- 3307:3306
volumes:
- mysql_data:/var/lib/mysql
- ./infrastructure/mysql/init.sql:/docker-entrypoint-initdb.d/init.sql
command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--max_connections=1000
networks:
- erp-network
healthcheck:
test:
- CMD
- mysqladmin
- ping
- -h
- localhost
- -uroot
- -proot123456
interval: 10s
timeout: 5s
retries: 10
start_period: 30s
redis:
image: redis:7-alpine
container_name: erp-redis
ports:
- 6379:6379
volumes:
- redis_data:/data
command: redis-server --appendonly yes --requirepass redis123456 --maxmemory 512mb
--maxmemory-policy allkeys-lru
networks:
- erp-network
healthcheck:
test:
- CMD
- redis-cli
- -a
- redis123456
- ping
interval: 10s
timeout: 5s
retries: 5
nacos:
image: nacos/nacos-server:v2.2.3
container_name: erp-nacos
environment:
MODE: standalone
SPRING_DATASOURCE_PLATFORM: mysql
MYSQL_SERVICE_HOST: mysql
MYSQL_SERVICE_DB_NAME: nacos_config
MYSQL_SERVICE_PORT: 3306
MYSQL_SERVICE_USER: root
MYSQL_SERVICE_PASSWORD: root123456
NACOS_AUTH_ENABLE: 'true'
NACOS_AUTH_TOKEN: SecretKey012345678901234567890123456789012345678901234567890123456789
NACOS_AUTH_IDENTITY_KEY: serverIdentity
NACOS_AUTH_IDENTITY_VALUE: security
ports:
- 8848:8848
- 9848:9848
- 9849:9849
volumes:
- nacos_logs:/home/nacos/logs
- nacos_data:/home/nacos/data
depends_on:
mysql:
condition: service_healthy
networks:
- erp-network
healthcheck:
test:
- CMD
- curl
- -f
- http://localhost:8848/nacos/
interval: 15s
timeout: 10s
retries: 5
start_period: 60s
rocketmq:
image: apache/rocketmq:5.1.4
container_name: erp-rocketmq
    ports:
      - 9876:9876
environment:
NAMESRV_ADDR: localhost:9876
JAVA_OPTS: -Xms1g -Xmx1g
volumes:
- rocketmq_logs:/home/rocketmq/logs
- rocketmq_store:/home/rocketmq/store
networks:
- erp-network
command: sh mqnamesrv
healthcheck:
      test:
        - CMD-SHELL
        - sh mqadmin clusterList -n localhost:9876
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
rocketmq-broker:
image: apache/rocketmq:5.1.4
container_name: erp-rocketmq-broker
ports:
- 10911:10911
environment:
NAMESRV_ADDR: rocketmq:9876
JAVA_OPTS: -Xms1g -Xmx1g -Xmn512m
volumes:
- rocketmq_store:/home/rocketmq/store
depends_on:
- rocketmq
networks:
- erp-network
command: sh mqbroker -n rocketmq:9876 -c ../conf/broker.conf
rocketmq-console:
image: apacherocketmq/rocketmq-dashboard:latest
container_name: erp-rocketmq-console
ports:
- 8080:8080
environment:
JAVA_OPTS: -Drocketmq.namesrv.addr=rocketmq:9876 -Dserver.port=8080
depends_on:
- rocketmq
networks:
- erp-network
seata:
image: seataio/seata-server:1.7.1
container_name: erp-seata
ports:
- 8091:8091
- 7091:7091
environment:
SEATA_IP: seata
SEATA_PORT: '8091'
STORE_MODE: db
SERVER_NODE: '1'
volumes:
- ./infrastructure/seata/registry.conf:/seata-server/resources/registry.conf
- ./infrastructure/seata/file.conf:/seata-server/resources/file.conf
depends_on:
mysql:
condition: service_healthy
nacos:
condition: service_started
networks:
- erp-network
skywalking-oap:
image: apache/skywalking-oap-server:9.7.0
container_name: erp-skywalking-oap
ports:
- 11800:11800
- 12800:12800
environment:
SW_STORAGE: elasticsearch7
SW_STORAGE_ES_CLUSTER_NODES: elasticsearch:9200
depends_on:
elasticsearch:
condition: service_healthy
networks:
- erp-network
skywalking-ui:
image: apache/skywalking-ui:9.7.0
container_name: erp-skywalking-ui
ports:
- 8081:8080
environment:
SW_OAP_ADDRESS: skywalking-oap:12800
depends_on:
- skywalking-oap
networks:
- erp-network
elasticsearch:
image: elasticsearch:7.17.16
container_name: erp-elasticsearch
environment:
discovery.type: single-node
ES_JAVA_OPTS: -Xms512m -Xmx512m
xpack.security.enabled: 'false'
ports:
- 9200:9200
- 9300:9300
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- erp-network
healthcheck:
test:
- CMD
- curl
- -f
- http://localhost:9200/_cluster/health
interval: 15s
timeout: 10s
retries: 5
start_period: 60s
minio:
image: minio/minio:latest
container_name: erp-minio
ports:
- 9000:9000
- 9001:9001
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin123
volumes:
- minio_data:/data
command: server /data --console-address ":9001"
networks:
- erp-network
healthcheck:
test:
- CMD
- curl
- -f
- http://localhost:9000/minio/health/live
interval: 15s
timeout: 10s
retries: 3
start_period: 30s
gateway:
build:
context: ./gateway
dockerfile: docker/Dockerfile
container_name: erp-gateway
    ports:
      - 8180:8080 # host 8180; host port 8080 is taken by rocketmq-console
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JWT_SECRET=your-256-bit-secret-key-for-jwt-token-generation-erp
- JAVA_OPTS=-Xms256m -Xmx512m -XX:+UseG1GC
volumes:
- gateway_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8080/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
nacos:
condition: service_healthy
redis:
condition: service_healthy
user-service:
build:
context: .
dockerfile: services/user-service/Dockerfile
container_name: erp-user-service
ports:
- 8082:8082
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- user_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8082/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
admin-service:
build:
context: .
dockerfile: services/admin-service/Dockerfile
container_name: erp-admin-service
    ports:
      - 8181:8081 # host 8181; host port 8081 is taken by skywalking-ui
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- admin_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8081/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
product-service:
build:
context: .
dockerfile: services/product-service/Dockerfile
container_name: erp-product-service
ports:
- 8083:8083
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- product_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8083/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
order-service:
build:
context: .
dockerfile: services/order-service/Dockerfile
container_name: erp-order-service
ports:
- 8084:8082
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- ORDER_SERVICE_PORT=8082
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- order_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8082/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
inventory-service:
build:
context: .
dockerfile: services/inventory-service/Dockerfile
container_name: erp-inventory-service
ports:
- 8085:8084
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- inventory_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8084/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
tenant-service:
build:
context: .
dockerfile: services/tenant-service/Dockerfile
container_name: erp-tenant-service
ports:
- 8086:8083
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- tenant_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8083/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
file-service:
build:
context: .
dockerfile: services/file-service/Dockerfile
container_name: erp-file-service
ports:
- 8090:8082
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- MINIO_ENDPOINT=http://minio:9000
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin123
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- file_logs:/app/logs
- file_storage:/app/storage
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8082/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
minio:
condition: service_healthy
nacos:
condition: service_healthy
scheduled-task-service:
build:
context: .
dockerfile: services/scheduled-task-service/Dockerfile
container_name: erp-scheduled-task-service
    ports:
      - 8088:8088 # host 8088; host port 8091 is taken by seata
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- SPRING_CLOUD_NACOS_CONFIG_ENABLED=false
- SPRING_CLOUD_NACOS_DISCOVERY_ENABLED=false
- SPRING_CLOUD_CONFIG_ENABLED=false
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- task_logs:/app/logs
networks:
- erp-network
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- http://localhost:8088/actuator/health
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
networks:
erp-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
volumes:
mysql_data: null
redis_data: null
nacos_logs: null
nacos_data: null
rocketmq_logs: null
rocketmq_store: null
elasticsearch_data: null
minio_data: null
gateway_logs: null
user_logs: null
admin_logs: null
product_logs: null
order_logs: null
inventory_logs: null
tenant_logs: null
file_logs: null
file_storage: null
task_logs: null

651
docker-compose.yml.bak Normal file
View File

@ -0,0 +1,651 @@
version: '3.8'
services:
# ============================================================
  # Infrastructure services
# ============================================================
  # MySQL database
mysql:
image: mysql:8.0
container_name: erp-mysql
environment:
MYSQL_ROOT_PASSWORD: root123456
MYSQL_DATABASE: erp_java
MYSQL_USER: erp_user
MYSQL_PASSWORD: erp123456
ports:
- "3307:3306"
volumes:
- mysql_data:/var/lib/mysql
- ./infrastructure/mysql/init.sql:/docker-entrypoint-initdb.d/init.sql
command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max_connections=1000
networks:
- erp-network
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot123456"]
interval: 10s
timeout: 5s
retries: 10
start_period: 30s
  # Redis cache
redis:
image: redis:7-alpine
container_name: erp-redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
command: redis-server --appendonly yes --requirepass redis123456 --maxmemory 512mb --maxmemory-policy allkeys-lru
networks:
- erp-network
healthcheck:
test: ["CMD", "redis-cli", "-a", "redis123456", "ping"]
interval: 10s
timeout: 5s
retries: 5
  # Nacos service registry and configuration center
nacos:
image: nacos/nacos-server:v2.2.3
container_name: erp-nacos
environment:
MODE: standalone
SPRING_DATASOURCE_PLATFORM: mysql
MYSQL_SERVICE_HOST: mysql
MYSQL_SERVICE_DB_NAME: nacos_config
MYSQL_SERVICE_PORT: 3306
MYSQL_SERVICE_USER: root
MYSQL_SERVICE_PASSWORD: root123456
NACOS_AUTH_ENABLE: "true"
NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
NACOS_AUTH_IDENTITY_KEY: "serverIdentity"
NACOS_AUTH_IDENTITY_VALUE: "security"
ports:
- "8848:8848"
- "9848:9848"
- "9849:9849"
volumes:
- nacos_logs:/home/nacos/logs
- nacos_data:/home/nacos/data
depends_on:
mysql:
condition: service_healthy
networks:
- erp-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/"]
interval: 15s
timeout: 10s
retries: 5
start_period: 60s
# RocketMQ
rocketmq:
image: apache/rocketmq:5.1.4
container_name: erp-rocketmq
ports:
- "9876:9876"
- "10909:10909"
- "10911:10911"
- "10912:10912"
environment:
NAMESRV_ADDR: "localhost:9876"
JAVA_OPTS: "-Xms1g -Xmx1g"
volumes:
- rocketmq_logs:/home/rocketmq/logs
- rocketmq_store:/home/rocketmq/store
networks:
- erp-network
command: sh mqnamesrv
healthcheck:
test: ["CMD", "sh", "/usr/sbin/rocketmq-ce", "namesrv", "-n", "localhost:9876"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
# RocketMQ Broker
rocketmq-broker:
image: apache/rocketmq:5.1.4
container_name: erp-rocketmq-broker
ports:
- "10911:10911"
environment:
NAMESRV_ADDR: "rocketmq:9876"
JAVA_OPTS: "-Xms1g -Xmx1g -Xmn512m"
volumes:
- rocketmq_store:/home/rocketmq/store
depends_on:
- rocketmq
networks:
- erp-network
command: sh mqbroker -n rocketmq:9876 -c ../conf/broker.conf
  # RocketMQ dashboard
rocketmq-console:
image: apacherocketmq/rocketmq-dashboard:latest
container_name: erp-rocketmq-console
ports:
- "8080:8080"
environment:
JAVA_OPTS: "-Drocketmq.namesrv.addr=rocketmq:9876 -Dserver.port=8080"
depends_on:
- rocketmq
networks:
- erp-network
  # Seata distributed transactions
seata:
image: seataio/seata-server:1.7.1
container_name: erp-seata
ports:
- "8091:8091"
- "7091:7091"
environment:
SEATA_IP: seata
SEATA_PORT: "8091"
STORE_MODE: db
SERVER_NODE: "1"
volumes:
- ./infrastructure/seata/registry.conf:/seata-server/resources/registry.conf
- ./infrastructure/seata/file.conf:/seata-server/resources/file.conf
depends_on:
mysql:
condition: service_healthy
nacos:
condition: service_started
networks:
- erp-network
# SkyWalking OAP
skywalking-oap:
image: apache/skywalking-oap-server:9.7.0
container_name: erp-skywalking-oap
ports:
- "11800:11800"
- "12800:12800"
environment:
SW_STORAGE: elasticsearch7
SW_STORAGE_ES_CLUSTER_NODES: elasticsearch:9200
depends_on:
elasticsearch:
condition: service_healthy
networks:
- erp-network
# SkyWalking UI
skywalking-ui:
image: apache/skywalking-ui:9.7.0
container_name: erp-skywalking-ui
ports:
- "8081:8080"
environment:
SW_OAP_ADDRESS: skywalking-oap:12800
depends_on:
- skywalking-oap
networks:
- erp-network
# Elasticsearch
elasticsearch:
image: elasticsearch:7.17.16
container_name: erp-elasticsearch
environment:
discovery.type: single-node
ES_JAVA_OPTS: -Xms512m -Xmx512m
xpack.security.enabled: "false"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- erp-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9200/_cluster/health"]
interval: 15s
timeout: 10s
retries: 5
start_period: 60s
  # MinIO object storage
minio:
image: minio/minio:latest
container_name: erp-minio
ports:
- "9000:9000"
- "9001:9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin123
volumes:
- minio_data:/data
command: server /data --console-address ":9001"
networks:
- erp-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 15s
timeout: 10s
retries: 3
start_period: 30s
# ============================================================
  # API gateway
# ============================================================
gateway:
build:
context: ./gateway
dockerfile: docker/Dockerfile
container_name: erp-gateway
ports:
- "8080:8080"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JWT_SECRET=your-256-bit-secret-key-for-jwt-token-generation-erp
- JAVA_OPTS=-Xms256m -Xmx512m -XX:+UseG1GC
volumes:
- gateway_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
nacos:
condition: service_healthy
redis:
condition: service_healthy
# ============================================================
  # Business microservices
# ============================================================
user-service:
build:
context: .
dockerfile: services/user-service/Dockerfile
container_name: erp-user-service
ports:
- "8082:8082"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- user_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8082/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
admin-service:
build:
context: .
dockerfile: services/admin-service/Dockerfile
container_name: erp-admin-service
ports:
- "8081:8081"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- admin_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8081/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
product-service:
build:
context: .
dockerfile: services/product-service/Dockerfile
container_name: erp-product-service
ports:
- "8083:8083"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- product_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8083/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
order-service:
build:
context: .
dockerfile: services/order-service/Dockerfile
container_name: erp-order-service
ports:
- "8084:8082"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- ORDER_SERVICE_PORT=8082
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- order_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8082/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
inventory-service:
build:
context: .
dockerfile: services/inventory-service/Dockerfile
container_name: erp-inventory-service
ports:
- "8085:8084"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- inventory_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8084/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
tenant-service:
build:
context: .
dockerfile: services/tenant-service/Dockerfile
container_name: erp-tenant-service
ports:
- "8086:8083"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- tenant_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8083/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
file-service:
build:
context: .
dockerfile: services/file-service/Dockerfile
container_name: erp-file-service
ports:
- "8090:8082"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- MINIO_ENDPOINT=http://minio:9000
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin123
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- file_logs:/app/logs
- file_storage:/app/storage
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8082/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
minio:
condition: service_healthy
nacos:
condition: service_healthy
scheduled-task-service:
build:
context: .
dockerfile: services/scheduled-task-service/Dockerfile
container_name: erp-scheduled-task-service
ports:
- "8091:8088"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER_ADDR=nacos:8848
- NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- DB_PASSWORD=erp123456
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=redis123456
- JAVA_OPTS=-Xms512m -Xmx1024m -XX:+UseG1GC
volumes:
- task_logs:/app/logs
networks:
- erp-network
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8088/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
restart: unless-stopped
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
nacos:
condition: service_healthy
# ============================================================
# 可选服务(默认已注释,按需取消注释启用,以节省资源)
# ============================================================
# report-service:
# build:
# context: .
# dockerfile: services/report-service/Dockerfile
# container_name: erp-report-service
# ports:
# - "8092:8084"
# environment:
# - SPRING_PROFILES_ACTIVE=docker
# - NACOS_SERVER_ADDR=nacos:8848
# - DB_HOST=mysql
# - DB_PORT=3306
# - DB_NAME=erp_java
# - DB_USERNAME=erp_user
# - DB_PASSWORD=erp123456
# - REDIS_HOST=redis
# - REDIS_PORT=6379
# - REDIS_PASSWORD=redis123456
# networks:
# - erp-network
# restart: unless-stopped
# depends_on:
# mysql:
# condition: service_healthy
# redis:
# condition: service_healthy
# ============================================================
# 网络定义
# ============================================================
networks:
erp-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
# ============================================================
# 持久化卷
# ============================================================
volumes:
mysql_data:
redis_data:
nacos_logs:
nacos_data:
rocketmq_logs:
rocketmq_store:
elasticsearch_data:
minio_data:
gateway_logs:
user_logs:
admin_logs:
product_logs:
order_logs:
inventory_logs:
tenant_logs:
file_logs:
file_storage:
task_logs:

docs/DEPLOYMENT.md
# ERP Java Backend 部署文档
## 📁 目录结构
```
erp-java-backend/
├── docker-compose.yml # 全量Docker Compose(本地开发/测试)
├── infrastructure/
│ ├── kubernetes/
│ │ ├── erp-global-infra.yaml # 全局K8s配置(ConfigMap/Secret/Ingress)
│ │ ├── erp-db-init-job.yaml # 数据库初始化Job
│ │ └── kustomization.yaml # Kustomization配置
│ └── mysql/
│ └── init.sql # 数据库初始化SQL
├── gateway/
│ └── docker/
│ ├── Dockerfile # API网关Dockerfile
│ └── docker-compose.yml # 网关独立部署
└── services/
├── {service-name}/
│ ├── Dockerfile # 多阶段构建Dockerfile
│ ├── docker-compose.yml # 服务独立部署
│ └── k8s/
│ │ └── deployment.yaml # K8s完整部署(含HPA/PDB/Ingress)
```
## 🚀 快速开始
### 1. 本地Docker Compose启动(推荐开发测试用)
```bash
cd erp-java-backend
# 启动所有基础设施服务
docker-compose up -d mysql redis nacos
# 启动网关
docker-compose up -d gateway
# 按需启动业务服务
docker-compose up -d user-service product-service order-service
```
### 2. 本地构建并启动
```bash
# 构建所有服务镜像
docker-compose build
# 启动全部服务
docker-compose up -d
# 查看服务状态
docker-compose ps
# 查看日志
docker-compose logs -f user-service
```
### 3. 单服务独立部署
```bash
cd services/user-service
# 构建镜像
docker build -t erp-user-service:1.0.0 -f Dockerfile ../..
# 启动服务
docker-compose up -d
# 查看日志
docker logs -f erp-user-service
```
## ☸️ Kubernetes部署
### 前置要求
- Kubernetes 1.25+
- kubectl configured
- Ingress Controller (nginx-ingress)
- StorageClass (用于持久化存储)
### 部署步骤
```bash
# 1. 创建命名空间
kubectl apply -f infrastructure/kubernetes/erp-global-infra.yaml
# 2. 部署基础设施(MySQL/Redis/Nacos等)
# 参考 infrastructure/kubernetes/ 下的各服务配置
# 3. 部署业务服务
for svc in user product order inventory tenant; do
kubectl apply -f services/${svc}-service/k8s/deployment.yaml
done
# 4. 使用Kustomization一键部署
kubectl apply -k infrastructure/kubernetes/
# 5. 验证部署
kubectl get pods -n erp-prod
kubectl get svc -n erp-prod
kubectl get ingress -n erp-prod
```
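上述第4步的 Kustomization 一键部署,其 `kustomization.yaml` 大致形如(以下仅为示意,资源文件名与标签均为假设,以仓库 `infrastructure/kubernetes/` 下的实际内容为准):

```yaml
# infrastructure/kubernetes/kustomization.yaml(示意)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: erp-prod            # 统一注入命名空间
resources:
  - erp-global-infra.yaml      # 全局 ConfigMap/Secret/Ingress
  - erp-db-init-job.yaml       # 数据库初始化 Job
  - ../../services/user-service/k8s/deployment.yaml
commonLabels:
  app.kubernetes.io/part-of: erp-java-backend
```

执行 `kubectl apply -k infrastructure/kubernetes/` 时,kustomize 会合并这些资源并统一打标签后下发。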
### HPA自动扩缩容
所有服务均已配置HPA,基于CPU和内存利用率自动扩缩容:
```yaml
# 示例:user-service HPA配置
spec:
minReplicas: 3 # 最小3个副本
maxReplicas: 10 # 最大10个副本
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70 # CPU 70%时扩容
```
### PDB保护
所有服务配置了PodDisruptionBudget,保证滚动更新时的最小可用副本数:
```bash
kubectl get pdb -n erp-prod
```
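PDB 清单大致形如(示意;服务名与最小可用副本数为假设,以各服务 `k8s/deployment.yaml` 中的实际配置为准):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: user-service-pdb
  namespace: erp-prod
spec:
  minAvailable: 2            # 驱逐/滚动更新期间至少保留2个可用副本
  selector:
    matchLabels:
      app: user-service      # 与 Deployment 的 Pod 标签保持一致
```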
## 🔧 服务端口映射
| 服务 | 端口 | K8s Service | Ingress域名 |
|------|------|-------------|-------------|
| gateway | 8080 | gateway | api.erpzbbh.cn |
| user-service | 8082 | user-service | user.erpzbbh.cn |
| admin-service | 8081 | admin-service | admin.erpzbbh.cn |
| product-service | 8083 | product-service | product.erpzbbh.cn |
| tenant-service | 8083 | tenant-service | tenant.erpzbbh.cn |
| permission-service | 8084 | permission-service | permission.erpzbbh.cn |
| inventory-service | 8084 | inventory-service | inventory.erpzbbh.cn |
| order-service | 8082 | order-service | order.erpzbbh.cn |
| file-service | 8082 | file-service | file.erpzbbh.cn |
| scheduled-task-service | 8088 | scheduled-task-service | task.erpzbbh.cn |
| approval-flow-service | 8086 | approval-flow-service | approval.erpzbbh.cn |
| customer-service | 8086 | customer-service | customer.erpzbbh.cn |
| supplier-service | 8086 | supplier-service | supplier.erpzbbh.cn |
| invoice-service | 8086 | invoice-service | invoice.erpzbbh.cn |
| logistics-service | 8086 | logistics-service | logistics.erpzbbh.cn |
| waybill-service | 8086 | waybill-service | waybill.erpzbbh.cn |
| dashboard-service | 8086 | dashboard-service | dashboard.erpzbbh.cn |
| finance-service | 8007 | finance-service | finance.erpzbbh.cn |
| purchase-service | 8010 | purchase-service | purchase.erpzbbh.cn |
| reconciliation-service | 8018 | reconciliation-service | reconciliation.erpzbbh.cn |
| report-service | 8084 | report-service | report.erpzbbh.cn |
| sku-match-service | 8084 | sku-match-service | skumatch.erpzbbh.cn |
| notification-service | 8087 | notification-service | notification.erpzbbh.cn |
| system-tool-service | 8087 | system-tool-service | systemtool.erpzbbh.cn |
| print-service | 8089 | print-service | print.erpzbbh.cn |
| aftersale-service | 8087 | aftersale-service | aftersale.erpzbbh.cn |
| audit-log-service | 8098 | audit-log-service | audit.erpzbbh.cn |
| category-service | 8085 | category-service | category.erpzbbh.cn |
| data-import-export-service | 8088 | data-import-export-service | dataie.erpzbbh.cn |
| warehouse-service | 8084 | warehouse-service | warehouse.erpzbbh.cn |
| ai-service | 8087 | ai-service | ai.erpzbbh.cn |
## 🐳 Dockerfile规范
所有Dockerfile统一使用多阶段构建:
```dockerfile
# Stage 1: Build
FROM maven:3.9-eclipse-temurin-17-alpine AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline -B # 利用Docker缓存
COPY . .
WORKDIR /app/services/{service}
RUN mvn clean package -DskipTests -B
# Stage 2: Runtime
FROM eclipse-temurin:17-jre-alpine
RUN addgroup -g 1001 -S appgroup && \
adduser -u 1001 -S appuser -G appgroup
WORKDIR /app
COPY --from=builder /app/services/{service}/target/*.jar app.jar
RUN chown appuser:appgroup app.jar
ENV JAVA_OPTS="-Xms512m -Xmx1024m -XX:+UseG1GC"
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD wget -q --spider http://localhost:{port}/actuator/health || exit 1
EXPOSE {port}
USER appuser
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]
```
## 🔍 健康检查
所有服务暴露以下健康检查端点:
- `/actuator/health` - 基础健康检查
- `/actuator/health/liveness` - K8s livenessProbe
- `/actuator/health/readiness` - K8s readinessProbe
- `/actuator/prometheus` - Prometheus监控指标
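这些端点对应的 Spring Boot Actuator 配置大致如下(示意,可放在各服务的 application.yml 中):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,prometheus   # 暴露健康检查与Prometheus指标端点
  endpoint:
    health:
      probes:
        enabled: true                # 开启 /health/liveness 与 /health/readiness
      show-details: when_authorized  # 仅授权用户可见健康详情
```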
## 📊 资源限制
| 服务类型 | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---------|-------------|-----------|----------------|--------------|
| 重量级服务 | 250m | 1000m | 512Mi | 1Gi |
| 轻量级服务 | 100m | 500m | 256Mi | 512Mi |
| 基础设施 | 100m | 500m | 256Mi | 512Mi |
## 🔐 安全配置
- 所有容器以非root用户运行 (UID 1001)
- 使用Alpine轻量级基础镜像
- 敏感配置通过K8s Secret注入
- 生产环境建议使用外部密钥管理(Vault/AWS Secrets Manager)
## 📝 数据库迁移
### K8s Job方式(推荐生产)
```bash
# 部署数据库初始化Job
kubectl apply -f infrastructure/kubernetes/erp-db-init-job.yaml
# 查看Job状态
kubectl get job erp-db-init -n erp-prod
kubectl logs job/erp-db-init -n erp-prod
```
### Docker Compose方式
```bash
# 初始化脚本会自动执行 init.sql
docker-compose up -d mysql
```
## 🌐 Ingress配置
所有服务均配置了TLS Ingress,示例:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: user-service-ingress
namespace: erp-prod
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
ingressClassName: nginx
tls:
- hosts:
- user-service.erpzbbh.cn
secretName: user-service-tls
rules:
- host: user-service.erpzbbh.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: user-service
port:
name: http
```
## 🔧 故障排查
```bash
# 查看Pod日志
kubectl logs -f deployment/user-service -n erp-prod
# 进入Pod调试
kubectl exec -it deployment/user-service -n erp-prod -- sh
# 查看Pod事件
kubectl describe pod -n erp-prod -l app=user-service
# 查看资源使用
kubectl top pod -n erp-prod
# 重启Deployment
kubectl rollout restart deployment/user-service -n erp-prod
# 回滚到上一版本
kubectl rollout undo deployment/user-service -n erp-prod
```
## 📋 环境变量参考
| 变量名 | 说明 | 示例值 |
|--------|------|--------|
| SPRING_PROFILES_ACTIVE | 激活的配置环境 | prod |
| NACOS_SERVER_ADDR | Nacos地址 | nacos:8848 |
| NACOS_NAMESPACE | Nacos命名空间 | prod |
| DB_HOST | 数据库主机 | mysql |
| DB_PORT | 数据库端口 | 3306 |
| DB_NAME | 数据库名 | erp_java |
| DB_USERNAME | 数据库用户名 | erp_user |
| DB_PASSWORD | 数据库密码 | * |
| REDIS_HOST | Redis主机 | redis |
| REDIS_PORT | Redis端口 | 6379 |
| REDIS_PASSWORD | Redis密码 | * |
| JAVA_OPTS | JVM参数 | -Xms512m -Xmx1024m |

# 基础设施部署文档
## 概述
本项目使用 Docker Compose 管理所有基础设施服务(MySQL、Redis、Nacos、RocketMQ、Seata 等)。
**重要:国内环境无法直接从 Docker Hub 拉取镜像,需要配置国内镜像加速器。**
---
## 前提条件
### 1. 安装 Docker 和 Docker Compose
```bash
# 安装 Docker
curl -fsSL https://get.docker.com | bash
# 安装 Docker Compose
apt-get install -y docker-compose
# 验证安装
docker --version
docker-compose --version
```
### 2. 配置国内 Docker 镜像加速
创建或编辑 `/etc/docker/daemon.json`
```json
{
"registry-mirrors": [
"https://dockerproxy.cn",
"https://docker.rainbond.cc",
"https://docker.m.daocloud.io",
"https://docker.wanpeng.top"
],
"storage-driver": "overlay2"
}
```
然后重启 Docker
```bash
systemctl restart docker
```
---
## 基础设施服务列表
| 服务 | 镜像 | 端口 | 说明 |
|------|------|------|------|
| MySQL | mysql:8.0 | 3307 | 数据库 |
| Redis | redis:7-alpine | 6379 | 缓存 |
| Nacos | nacos/nacos-server:v2.2.3 | 8848, 9848, 9849 | 服务注册与配置中心 |
| RocketMQ | apache/rocketmq:5.1.4 | 9876, 10909, 10911 | 消息队列 |
| Seata | seataio/seata-server:1.7.1 | 8091, 7091 | 分布式事务 |
---
## 启动基础设施
### 方法一:使用 Docker Compose(推荐)
```bash
# 进入项目目录
cd /path/to/erp-java-backend
# 启动所有基础设施服务(不包括业务服务)
docker-compose up -d mysql redis nacos rocketmq rocketmq-broker rocketmq-console seata
# 启动所有基础设施(完整)
docker-compose up -d
```
### 方法二:逐个启动
```bash
# 1. 启动 MySQL
docker-compose up -d mysql
# 2. 启动 Redis
docker-compose up -d redis
# 3. 等待 MySQL 健康检查通过后,启动 Nacos
docker-compose up -d nacos
# 4. 启动 RocketMQ
docker-compose up -d rocketmq rocketmq-broker
```
### 验证服务状态
```bash
# 查看运行中的容器
docker-compose ps
# 查看容器日志
docker-compose logs -f nacos
# 健康检查
curl http://localhost:8848/nacos/
```
---
## Nacos 下载地址(国内镜像)
如果需要手动下载 Nacos Server,以下是国内可用的镜像源:
### 华为云镜像(推荐)
```
https://repo.huaweicloud.com/alibaba/nacos/2.2.3/nacos-server-2.2.3.tar.gz
```
### 中科大镜像
```
https://mirrors.ustc.edu.cn/github/alibaba/nacos/v2.2.3/nacos-server-2.2.3.tar.gz
```
### 腾讯云镜像
```
https://mirrors.cloud.tencent.com/github/alibaba/nacos/2.2.3/nacos-server-2.2.3.tar.gz
```
### Maven 中央仓库
```
https://repo.maven.apache.org/maven2/com/alibaba/nacos/nacos-server/2.2.3/nacos-server-2.2.3.tar.gz
```
### GitHub 直链(需要代理)
```
https://github.com/alibaba/nacos/releases/download/2.2.3/nacos-server-2.2.3.tar.gz
```
### Nacos Docker 镜像
```bash
# Docker Hub 镜像(需要配置加速器)
docker pull nacos/nacos-server:v2.2.3
# 或者使用国内镜像
docker pull dockerpull.cn/nacos/nacos-server:v2.2.3
```
---
## 启动后配置
### Nacos 控制台
1. 访问 http://localhost:8848/nacos
2. 默认用户名:`nacos`
3. 默认密码:`nacos`
### 创建配置
Nacos 启动后会自动创建 `nacos_config` 数据库,但可能需要手动执行初始化脚本:
```bash
# 查看 Nacos 初始化 SQL
ls /root/.openclaw/workspace/erp-java-backend/nacos/init/
```
---
## 常见问题
### 1. Docker Hub 无法访问
**问题**`Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp`
**解决**
1. 配置国内 Docker 镜像加速器(见上文)
2. 或使用代理
### 2. Nacos 启动失败
**检查**
```bash
# 查看 Nacos 日志
docker-compose logs nacos
# 检查 MySQL 是否就绪
docker-compose ps mysql
```
### 3. 服务无法连接到 Nacos
**检查**
1. Nacos 是否已启动并通过健康检查
2. 网络是否正确配置
3. 防火墙是否开放 8848 端口
### 4. MySQL 连接失败
**检查**
```bash
# 查看 MySQL 日志
docker-compose logs mysql
# 测试连接
# 使用 127.0.0.1 强制走 TCP(-h localhost 会走本地 socket,忽略 -P 3307)
mysql -h 127.0.0.1 -P 3307 -uroot -proot123456
```
---
## 停止服务
```bash
# 停止所有服务(保留数据)
docker-compose stop
# 停止并删除容器(保留数据卷)
docker-compose down
# 停止并删除所有数据(危险!)
docker-compose down -v
```
---
## 端口占用检查
如果端口被占用:
```bash
# Linux
netstat -tlnp | grep 8848
lsof -i :8848
# 或使用 ss
ss -tlnp | grep 8848
```
---
## 资源要求
| 服务 | 最低内存 | 推荐内存 |
|------|---------|---------|
| MySQL | 512MB | 1GB |
| Redis | 128MB | 256MB |
| Nacos | 512MB | 1GB |
| RocketMQ | 1GB | 2GB |
| Seata | 256MB | 512MB |
**总计:约 2.5GB - 4GB**

# Nacos 下载与部署状态报告
**生成时间**: 2026-04-05 20:30 CST
**工作目录**: /root/.openclaw/workspace/erp-java-backend
---
## 执行结果摘要
| 项目 | 状态 | 详情 |
|------|------|------|
| 项目编译 | ✅ 成功 | Maven 3.8.7, Java 21 |
| Maven 依赖下载 | ✅ 成功 | Maven Central 可访问 |
| Nacos 下载 (华为云) | ❌ 失败 | 返回 HTML 页面(非 tar.gz) |
| Nacos 下载 (中科大) | ❌ 失败 | 文件为空 |
| Nacos 下载 (腾讯云) | ❌ 失败 | 文件为空 |
| Nacos 下载 (Maven Central) | ❌ 失败 | 文件为空 |
| Docker Hub 拉取 | ❌ 失败 | 连接被拒绝 |
| 中国 Docker 镜像 | ❌ 失败 | 返回 HTML 页面 |
---
## 详细测试结果
### 1. Nacos 镜像源测试
```bash
# 华为云镜像 - 返回 HTML(不是 tar.gz)
wget https://repo.huaweicloud.com/alibaba/nacos/2.2.3/nacos-server-2.2.3.tar.gz
结果: 文件大小 9KB,类型 HTML document
# 中科大镜像 - 文件为空
wget https://mirrors.ustc.edu.cn/github/alibaba/nacos/v2.2.3/nacos-server-2.2.3.tar.gz
结果: 文件大小 0 字节
# 腾讯云镜像 - 文件为空
wget https://mirrors.cloud.tencent.com/github/alibaba/nacos/2.2.3/nacos-server-2.2.3.tar.gz
结果: 文件大小 0 字节
# Maven 中央仓库 - 文件为空
wget https://repo.maven.apache.org/maven2/com/alibaba/nacos/nacos-server/2.2.3/nacos-server-2.2.3.tar.gz
结果: 文件大小 0 字节
# GitHub 直链 - 超时(被屏蔽)
curl -L https://github.com/alibaba/nacos/releases/download/2.2.3/nacos-server-2.2.3.tar.gz
结果: 连接超时
```
### 2. Docker 镜像拉取测试
```bash
# Docker Hub (官方)
docker pull nacos/nacos-server:v2.2.3
结果: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp 185.45.6.57:443: connection refused
# Docker 代理镜像
docker pull dockerpull.cn/nacos/nacos-server:v2.2.3
结果: 返回 HTML 页面
```
### 3. 项目编译测试
```bash
cd /root/.openclaw/workspace/erp-java-backend
mvn compile -pl services/user-service -am -DskipTests
结果: BUILD SUCCESS
```
---
## 现有环境状态
### 可用服务
| 服务 | 地址 | 端口 | 状态 |
|------|------|------|------|
| Redis | localhost | 6379 | ✅ 运行中(无密码) |
| MySQL | localhost | 3306 | ✅ 运行中(需要密码) |
| Maven Central | - | - | ✅ 可访问 |
### 已验证
- ✅ 项目可以成功编译
- ✅ Maven 依赖可以下载
- ✅ Redis 服务运行正常
### 无法访问
- ❌ Docker Hub (registry-1.docker.io)
- ❌ GitHub (github.com)
- ❌ 华为云 Nacos 镜像
- ❌ 中科大 Nacos 镜像
- ❌ 腾讯云 Nacos 镜像
- ❌ Maven Central Nacos 目录
---
## 本地环境信息
```bash
# Docker
Docker: 28.2.2
Docker Compose: 未安装
# Java
Java: OpenJDK 21.0.10
Maven: 3.8.7
# 操作系统
OS: Ubuntu (Linux 6.8.0-106-generic)
```
---
## 问题分析
### 根本原因
**网络隔离**:当前环境无法访问以下关键资源:
1. Docker Hub (海外 Docker 镜像仓库)
2. GitHub (代码托管和发布页)
3. 国内 Nacos 镜像源(疑似路径变更或访问限制)
### 影响范围
- 无法使用 `docker-compose up` 启动基础设施
- 无法手动下载 Nacos Server
- 无法拉取任何 Docker 镜像
### 可能的解决方案
#### 方案一:配置 HTTP 代理
如果公司有 HTTP 代理:
```bash
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
```
#### 方案二:手动打包传输
在有网络的机器上:
```bash
# 下载 Nacos
wget https://github.com/alibaba/nacos/releases/download/2.2.3/nacos-server-2.2.3.tar.gz
# 传输到目标服务器
scp nacos-server-2.2.3.tar.gz user@target:/opt/nacos/
```
#### 方案三:使用 VM 镜像
如果云服务商提供预配置 VM 镜像,可以直接使用包含 Nacos 的镜像。
#### 方案四:联系网络管理员
请求开放以下白名单:
- `repo.huaweicloud.com`
- `mirrors.ustc.edu.cn`
- `mirrors.cloud.tencent.com`
- `registry-1.docker.io`
---
## 后续建议
### 短期
1. **获取 Nacos 安装包**:通过其他途径(U盘、代理等)获取 Nacos 2.2.3 安装包
2. **配置 MySQL**:设置正确的 MySQL root 密码
3. **手动部署**:按照 DEPLOYMENT_INFRASTRUCTURE.md 手动启动各服务
### 长期
1. **建立内部镜像仓库**:搭建 Nexus/Artifactory 托管必要的 Docker 镜像和 JAR 依赖
2. **离线部署方案**:准备完整的离线部署包
3. **网络策略调整**:与运维协商开放必要的镜像源
---
## 相关文档
- [基础设施部署文档](./DEPLOYMENT_INFRASTRUCTURE.md)
- [Nacos 下载地址清单](./NACOS_DOWNLOAD_STATUS.md)

# 微服务数据库拆分设计方案
## 1. 现状分析
### 当前PHP单库结构
```
erp_db (单库,所有表)
├── 用户相关表
│ ├── users
│ ├── roles
│ ├── permissions
│ └── user_roles
├── 订单相关表
│ ├── orders
│ ├── order_items
│ └── order_operations
├── 商品相关表
│ ├── goods
│ ├── brands
│ ├── suppliers
│ └── categories
├── 库存相关表
│ ├── stock
│ └── stock_logs
├── 财务相关表
│ ├── finance_transactions
│ └── account_balances
└── 其他业务表
├── files
├── notifications
├── tenants
└── system_configs
```
## 2. 拆分原则
### 2.1 按业务领域拆分
- 每个微服务拥有独立的数据库
- 数据库按业务领域划分
- 服务间通过API或消息队列通信
### 2.2 数据所有权原则
- 谁创建数据,谁拥有数据
- 其他服务通过服务间调用访问数据
- 避免直接数据库访问
### 2.3 数据一致性策略
- 强一致性:分布式事务(Seata)
- 最终一致性:消息队列 + 事件驱动
- 数据同步:CDC(变更数据捕获)
## 3. 拆分方案
### 3.1 数据库拆分设计
| 微服务 | 数据库名 | 包含表 | 数据量预估 |
|--------|----------|--------|------------|
| **用户服务** | `user_db` | users, roles, permissions, user_roles, login_logs | 10万+ |
| **订单服务** | `order_db` | orders, order_items, order_operations, order_logs | 100万+ |
| **商品服务** | `product_db` | goods, brands, suppliers, categories, product_prices | 5万+ |
| **库存服务** | `inventory_db` | stock, stock_logs, warehouses, locations | 50万+ |
| **财务服务** | `finance_db` | finance_transactions, account_balances, invoices, payments | 50万+ |
| **文件服务** | `file_db` | files, file_groups, file_metadata | 100万+ |
| **租户服务** | `tenant_db` | tenants, packages, tenant_features | 1万+ |
| **通知服务** | `notification_db` | notifications, notification_templates, message_queue | 100万+ |
| **配置服务** | `config_db` | system_configs, config_groups, config_history | 1万+ |
### 3.2 共享数据表
| 表名 | 用途 | 归属服务 | 访问方式 |
|------|------|----------|----------|
| **字典表** | 通用字典数据 | 配置服务 | API调用 |
| **地区表** | 省市区数据 | 配置服务 | API调用 |
| **日志表** | 操作日志 | 日志服务 | 消息队列 |
## 4. 迁移策略
### 4.1 分阶段迁移
```
阶段1: 数据库逻辑拆分 (本周)
- 创建独立数据库
- 迁移表结构
- 保持PHP系统正常运行
阶段2: 数据同步 (下周)
- 建立数据同步通道
- 实时同步关键数据
- 验证数据一致性
阶段3: 服务切换 (下下周)
- 逐步切换服务到新数据库
- 监控数据一致性
- 回滚机制准备
阶段4: 完全切换 (下下下周)
- 所有服务使用新数据库
- 停用旧数据库
- 清理同步通道
```
### 4.2 数据同步方案
#### 方案1: 双写模式
```
PHP应用 → 同时写入 → 旧数据库 + 新数据库
数据一致性校验
```
#### 方案2: CDC同步模式
```
旧数据库 → Debezium CDC → Kafka → 新数据库
(变更数据捕获) (消息队列)
```
#### 方案3: 应用层同步
```
PHP应用 → 写入旧数据库 → 同步服务 → 写入新数据库
```
## 5. 技术实现
### 5.1 数据库连接配置
```yaml
# 用户服务配置
spring:
datasource:
url: jdbc:mysql://mysql-server:3306/user_db
username: user_service
password: ${DB_PASSWORD}
# 订单服务配置
spring:
datasource:
url: jdbc:mysql://mysql-server:3306/order_db
username: order_service
password: ${DB_PASSWORD}
```
### 5.2 数据同步配置
```yaml
# Debezium CDC配置
debezium:
connector:
class: io.debezium.connector.mysql.MySqlConnector
database.hostname: mysql-server
database.port: 3306
database.user: cdc_user
database.password: ${CDC_PASSWORD}
database.server.id: 184054
database.server.name: erp_cdc
table.include.list: erp_db.*
database.history.kafka.bootstrap.servers: kafka:9092
database.history.kafka.topic: dbhistory.erp
```
### 5.3 服务间数据访问
```java
// 订单服务需要用户信息时调用用户服务API
@FeignClient(name = "user-service")
public interface UserServiceClient {
@GetMapping("/api/users/{userId}")
UserDTO getUserById(@PathVariable Long userId);
}
// 而不是直接查询用户数据库
```
## 6. 风险与应对
### 6.1 数据一致性风险
- **风险**: 拆分过程中数据不一致
- **应对**: 实时数据校验,不一致时告警并自动修复
### 6.2 性能风险
- **风险**: 服务间调用增加延迟
- **应对**: 缓存常用数据,批量查询优化
### 6.3 复杂性风险
- **风险**: 系统复杂度增加
- **应对**: 完善监控告警,自动化运维
## 7. 实施计划
### 7.1 时间计划
```
第1周: 方案设计 + 环境准备
第2周: 数据库创建 + 表结构迁移
第3周: 数据同步通道建立
第4周: 服务逐步切换
第5周: 完全切换 + 验证
```
### 7.2 团队分工
- **DBA团队**: 数据库拆分、备份恢复
- **开发团队**: 服务改造、API适配
- **运维团队**: 监控部署、性能优化
- **测试团队**: 数据一致性验证
## 8. 成功指标
### 8.1 技术指标
- 数据一致性: 99.99%
- 服务响应时间: < 200ms (P95)
- 系统可用性: 99.9%
### 8.2 业务指标
- 零数据丢失
- 业务无感知切换
- 性能不下降
---
**下一步行动:**
1. 评审设计方案
2. 准备数据库环境
3. 开始阶段1实施

-- ============================================================
-- V2.4 幂等性支持:唯一索引
-- ============================================================
-- 订单表:平台订单号+平台+店铺 唯一,防止重复创建订单
-- 库存操作记录表SKU+仓库+关联单号 唯一,防止重复扣减/锁定/解锁
-- 物流轨迹表:(运单号, 轨迹时间, 地点) 已有去重,新增强制唯一约束
-- ============================================================
-- 1. orders 表:平台订单号+平台+店铺 唯一索引
-- 确保同一店铺同一平台订单号不会重复创建
ALTER TABLE orders
ADD CONSTRAINT uk_orders_platform_order_sn
UNIQUE (platform_order_sn, platform, shop_id);
-- 2. stocks 表:(sku_code, warehouse_id) 已有主键,验证唯一性
-- 确保每个仓库的SKU库存记录唯一
ALTER TABLE stocks
ADD CONSTRAINT uk_stocks_sku_warehouse
UNIQUE (sku_code, warehouse_id);
-- 3. stock_logs 表:(sku_code, warehouse_id, related_no, type) 唯一索引
-- 防止同一操作重复记录(如重复扣减、重复解锁)
-- 注意:如果 stock_logs 表没有这些字段,请根据实际表结构调整
-- 以下为假设字段名,请根据实际情况修改
-- ALTER TABLE stock_logs
-- ADD CONSTRAINT uk_stock_logs_operation
-- UNIQUE (sku_code, warehouse_id, related_no, operation_type);
-- 4. waybill_status 表:(waybill_no) 已有唯一索引,验证是否存在
-- 确保运单号唯一
-- ALTER TABLE waybill_status ADD UNIQUE (waybill_no);
-- 5. logistics_trace 表:(waybill_no, trace_time, location) 唯一索引
-- 确保同一运单同一时间同一地点的轨迹不重复
ALTER TABLE logistics_trace
ADD CONSTRAINT uk_logistics_trace_waybill_time_location
UNIQUE (waybill_no, trace_time, location);
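加唯一约束前建议先检查存量重复数据,否则 `ALTER TABLE ... ADD CONSTRAINT` 会直接失败。以 orders 表为例(示意):

```sql
-- 查找已存在的重复 (platform_order_sn, platform, shop_id) 组合,
-- 清理或合并这些记录后再执行上面的唯一约束
SELECT platform_order_sn, platform, shop_id, COUNT(*) AS cnt
FROM orders
GROUP BY platform_order_sn, platform, shop_id
HAVING cnt > 1;
```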

# ERP系统前后端功能对比分析报告
**分析时间:** 2026-04-05
**分析范围:** 前端API模块 vs 后端服务实现
**分析重点:** 订单模块、库存联动、财务联动
## 一、前端API模块统计
### 1.1 前端API文件总数:37个
主要模块分类:
| 模块类别 | 文件数量 | 关键文件 |
|---------|---------|---------|
| **订单相关** | 3 | order.ts, orderPull.ts, delivery.ts |
| **商品管理** | 4 | goods.ts, platformGoods.ts, platform-product.ts, sku-match.ts |
| **库存管理** | 2 | stock.ts, warehouse.ts |
| **采购管理** | 2 | purchase.ts, supplier.ts |
| **售后管理** | 1 | afterSale.ts |
| **报表统计** | 2 | report.ts, tenant-report.ts |
| **用户权限** | 3 | user.ts, role.ts, auth.ts |
| **打印相关** | 3 | print.ts, print-job.ts, print-plugin.ts |
| **其他** | 17 | 包括平台对接、通知、模板等 |
### 1.2 前端API函数总数:349个
按模块分布:
- 订单模块:23个API函数
- 库存模块:约15个API函数
- 采购模块:约12个API函数
- 售后模块:8个API函数
- 报表模块:5个API函数
## 二、后端Controller统计
### 2.1 后端Controller总数:59个
按服务分布:
- order-service: 1个主要Controller
- warehouse-service: 1个库存Controller
- purchase-service: 4个采购相关Controller
- tenant-service: 包含财务、报表等Controller
- 其他服务:用户、商品、文件等
## 三、订单模块深度对比分析
### 3.1 前端订单API(23个函数)
**核心功能分类:**
1. **订单查询(4个)**
- `getOrderList()` - 订单列表
- `getOrderDetail()` - 订单详情
- `getPendingMatchOrders()` - 待匹配订单
- `getPendingAuditOrders()` - 待审核订单
2. **订单拉取与创建(1个)**
- `pullOrders()` - 从平台拉取订单
3. **订单审核(3个)**
- `auditOrder()` - 单个审核
- `batchAuditOrders()` - 批量审核
- `batchOperation()` - 批量操作
4. **订单处理流程(5个)**
- `setWarehouseExpress()` - 设置仓库快递
- `shipOrder()` - 发货
- `completeOrder()` - 完成订单
- `syncOrderToPlatform()` - 同步到平台
- `matchOrder()` - 商品匹配
5. **统计与导出(3个)**
- `getOrderStatistics()` - 订单统计
- `exportOrders()` - 导出订单
- `getDashboardStats()` - 仪表板统计
6. **备注与日志(2个)**
- `updateOrderRemark()` - 更新备注
- `getOrderLogs()` - 获取订单日志
7. **选项接口(5个)**
- `getStatusOptions()` - 状态选项
- `getAuditStatusOptions()` - 审核状态选项
- `getDeliveryStatusOptions()` - 发货状态选项
- `getPlatformOptions()` - 平台选项
- `getShopOptions()` - 店铺选项
### 3.2 后端订单Controller(21个端点)
**端点映射对比:**
| 前端API | 后端端点 | 状态 | 备注 |
|---------|---------|------|------|
| `getOrderList()` | `GET /api/orders` | ✅ 已实现 | 功能完整 |
| `getOrderDetail()` | `GET /api/orders/{id}` | ✅ 已实现 | 功能完整 |
| `pullOrders()` | `POST /api/orders/pull` | ✅ 已实现 | 功能完整 |
| `auditOrder()` | `POST /api/orders/{id}/audit` | ✅ 已实现 | 功能完整 |
| `batchAuditOrders()` | `POST /api/orders/batch-audit` | ✅ 已实现 | 功能完整 |
| `setWarehouseExpress()` | `PUT /api/orders/{id}/warehouse` | ✅ 已实现 | 功能完整 |
| `shipOrder()` | `POST /api/orders/{id}/ship` | ✅ 已实现 | 功能完整 |
| `completeOrder()` | `POST /api/orders/{id}/complete` | ✅ 已实现 | 功能完整 |
| `batchOperation()` | `POST /api/orders/batch-operation` | ✅ 已实现 | 功能完整 |
| `getOrderStatistics()` | `GET /api/orders/statistics` | ✅ 已实现 | 功能完整 |
| `getDashboardStats()` | `GET /api/orders/dashboard` | ✅ 已实现 | 功能完整 |
| `exportOrders()` | `GET /api/orders/export` | ✅ 已实现 | 功能完整 |
| `updateOrderRemark()` | `PUT /api/orders/{id}/remark` | ✅ 已实现 | 功能完整 |
| `getOrderLogs()` | `GET /api/orders/{id}/logs` | ✅ 已实现 | 功能完整 |
| `syncOrderToPlatform()` | `POST /api/orders/{id}/sync` | ✅ 已实现 | 功能完整 |
| `getStatusOptions()` | `GET /api/orders/options/status` | ✅ 已实现 | 功能完整 |
| `getAuditStatusOptions()` | `GET /api/orders/options/audit-status` | ✅ 已实现 | 功能完整 |
| `getDeliveryStatusOptions()` | `GET /api/orders/options/delivery-status` | ✅ 已实现 | 功能完整 |
| `getPlatformOptions()` | `GET /api/orders/options/platforms` | ✅ 已实现 | 功能完整 |
| `getShopOptions()` | `GET /api/orders/options/shops` | ✅ 已实现 | 功能完整 |
| `getPendingMatchOrders()` | ❌ 无对应端点 | ⚠️ **缺失** | 前端有但后端无 |
| `getPendingAuditOrders()` | ❌ 无对应端点 | ⚠️ **缺失** | 前端有但后端无 |
| `matchOrder()` | ❌ 无对应端点 | ⚠️ **缺失** | 前端有但后端无 |
| `cancelOrder()` | `POST /api/orders/{id}/cancel` | ✅ 已实现 | 后端有但前端未调用 |
## 四、关键缺失功能分析
### 4.1 订单处理流程完整性分析
**订单状态流转:**
```
待处理 → 待审核 → 已审核 → 已发货 → 已完成
  ↓        ↓        ↓        ↓        ↓
pending → auditing → audited → shipped → completed
```
**缺失环节分析:**
1. **待匹配订单功能缺失**
- 前端:`getPendingMatchOrders()` 存在
- 后端:无对应Controller端点
- 影响:无法查看需要商品匹配的订单
2. **待审核订单专用接口缺失**
- 前端:`getPendingAuditOrders()` 存在
- 后端:无专用端点,需通过`getOrderList()`过滤
- 影响:审核效率降低
3. **商品匹配功能缺失**
- 前端:`matchOrder()` 存在
- 后端:无对应端点
- 影响:无法将平台商品与ERP商品关联
### 4.2 库存联动分析
**前端库存API**
- `getStockList()` - 库存列表
- `getStockDetail()` - 库存详情
- `inboundStock()` - 入库操作
- `outboundStock()` - 出库操作
- `adjustStock()` - 库存调整
- `getStockLogs()` - 库存流水
**后端库存Controller**
- `GET /api/stock/list` - 库存列表 ✅
- `GET /api/stock/detail` - 库存详情 ✅
- `POST /api/stock/inbound` - 手动入库 ✅
- `POST /api/stock/outbound` - 手动出库 ✅
- `GET /api/stock/logs` - 库存流水 ✅
**库存联动状态:**
1. ✅ **订单发货自动扣减库存** - **已实现**
- 通过`StockClient`服务间调用实现
- 代码位置:`OrderServiceImpl.deductStockForOrder()`
2. ⚠️ **库存预警与订单关联** - 需要确认是否实现
3. ⚠️ **库存锁定机制** - 需要确认是否实现
### 4.3 财务联动分析
**前端财务相关API**
- `getFinanceReport()` - 资金报表(在report.ts中)
**后端财务Controller**
- `GET /api/tenant/finance/records` - 财务记录 ✅
- `GET /api/tenant/finance/summary` - 财务汇总 ✅
- `POST /api/tenant/finance/receipt` - 创建收款 ✅
- `POST /api/tenant/finance/payment` - 创建付款 ✅
- `GET /api/tenant/finance/reconciliation` - 对账记录 ✅
**财务联动状态:**
1. ⚠️ **订单完成自动生成应收款** - **部分实现**
- 发现财务服务接口存在,但订单模块集成待确认
2. ⚠️ **采购单完成自动生成应付款** - **部分实现**
- `PurchaseOrderServiceImpl`中调用了`FinanceFeignClient`
3. ⚠️ **财务报表与订单数据关联** - 需要确认是否实现
### 4.4 售后模块分析
**前端售后API**
- `getAfterSalesList()` - 售后列表
- `getAfterSaleDetail()` - 售后详情
- `createAfterSale()` - 创建售后
- `updateAfterSaleStatus()` - 更新状态
- `deleteAfterSale()` - 删除售后
- `getAvailableOrders()` - 可售后订单
- `getAllAvailableOrders()` - 所有可售后订单
- `getAfterSaleStats()` - 售后统计
**后端售后Controller**
- ❌ **未找到对应的Controller**
- ⚠️ **严重缺失:整个售后模块后端未实现**
## 五、缺失功能清单(按优先级排序)
### 优先级 P0(核心功能缺失)
| 功能名称 | 前端需要 | 后端状态 | 缺失程度 | 影响 |
|---------|---------|---------|---------|------|
| 售后管理模块 | ✅ 8个API | ❌ 未实现 | 🔴 严重 | 无法处理退货退款 |
| 订单商品匹配 | ✅ `matchOrder()` | ❌ 未实现 | 🔴 严重 | 无法关联平台商品与ERP商品 |
| 待匹配订单查询 | ✅ `getPendingMatchOrders()` | ❌ 未实现 | 🔴 严重 | 无法查看需要匹配的订单 |
| 待审核订单专用接口 | ✅ `getPendingAuditOrders()` | ❌ 未实现 | 🟡 中等 | 审核效率低 |
### 优先级 P1(重要功能缺失)
| 功能名称 | 前端需要 | 后端状态 | 缺失程度 | 影响 |
|---------|---------|---------|---------|------|
| 财务与订单联动 | ✅ 资金报表 | ⚠️ 部分实现 | 🟡 中等 | 财务数据不完整 |
| 库存自动扣减 | ⚠️ 隐含需求 | ❓ 待确认 | 🟡 中等 | 库存数据不准确 |
| 报表完整实现 | ✅ 5个报表API | ⚠️ 部分实现 | 🟡 中等 | 数据分析能力弱 |
### 优先级 P2(优化功能缺失)
| 功能名称 | 前端需要 | 后端状态 | 缺失程度 | 影响 |
|---------|---------|---------|---------|------|
| 批量操作优化 | ✅ `batchOperation()` | ✅ 已实现 | 🟢 轻微 | 功能完整 |
| 状态流转验证 | ⚠️ 隐含需求 | ❓ 待确认 | 🟢 轻微 | 状态管理可能混乱 |
| 操作日志完整 | ✅ `getOrderLogs()` | ✅ 已实现 | 🟢 轻微 | 功能完整 |
## 六、订单处理流程完整性验证
### 6.1 标准ERP订单流程
```
1. 订单拉取 → 2. 商品匹配 → 3. 订单审核 → 4. 仓库分配 → 5. 发货 → 6. 完成 → 7. 财务结算
```
### 6.2 当前实现状态
- ✅ **步骤1:订单拉取** - 已实现
- ❌ **步骤2:商品匹配** - 缺失(关键环节)
- ✅ **步骤3:订单审核** - 已实现
- ✅ **步骤4:仓库分配** - 已实现
- ✅ **步骤5:发货** - 已实现
- ✅ **步骤6:完成** - 已实现
- ⚠️ **步骤7:财务结算** - 部分实现
### 6.3 库存联动验证
- ✅ **发货扣减库存** - **已实现**(代码审查确认)
- 位置:`OrderServiceImpl.deductStockForOrder()`
- 机制:通过`StockClient`调用库存服务扣减
- ⚠️ **库存锁定机制** - 需要进一步确认
- ⚠️ **库存预警** - 需要进一步确认
### 6.4 财务联动验证
- ⚠️ **订单完成自动生成应收款** - **部分实现**
- 发现:`PurchaseOrderServiceImpl`中有财务客户端调用
- 状态:采购模块有财务联动,订单模块待确认
- ⚠️ **采购单完成自动生成应付款** - **部分实现**
- 发现:`FinanceFeignClient`存在
- 状态:财务服务接口已定义,集成程度待确认
## 七、建议与改进方案
### 7.1 立即修复(P0优先级)
1. **实现售后管理模块**
- 创建 `after-sale-service`
- 实现售后单CRUD、状态流转、退款处理
- 关联订单和库存模块
2. **实现商品匹配功能**
- 在 `order-service` 添加匹配接口
- 实现平台SKU与ERP商品关联
- 添加匹配规则和算法
3. **完善订单查询接口**
- 添加 `GET /api/orders/pending-match`
- 添加 `GET /api/orders/pending-audit`
- 优化查询性能
### 7.2 近期优化(P1优先级)
1. **强化财务联动**
- 订单完成自动生成应收款
- 采购单完成自动生成应付款
- 实现自动对账功能
2. **完善库存联动**
- 发货时自动扣减库存
- 实现库存锁定机制
- 添加库存预警功能
3. **报表系统完善**
- 实现完整的销售报表
- 实现完整的采购报表
- 实现完整的库存报表
### 7.3 长期规划(P2优先级)
1. **流程优化**
- 实现工作流引擎
- 添加审批流程配置
- 实现自动化规则
2. **性能优化**
- 大数据量分页优化
- 缓存机制实现
- 异步处理优化
## 八、技术债务评估
### 8.1 架构层面
- **微服务拆分合理**:订单、库存、采购等服务分离清晰
- **API设计规范**RESTful风格基本遵循
- **数据一致性**:需要加强分布式事务管理
### 8.2 代码层面
- **前端API设计完整**:接口定义清晰
- **后端实现不完整**:部分前端API无后端实现
- **错误处理**:需要统一异常处理机制
### 8.3 业务层面
- **核心流程完整**:订单处理基本流程存在
- **关键功能缺失**:售后、匹配等核心功能未实现
- **数据关联弱**:模块间数据联动不足
## 九、结论
### 9.1 总体评估
- **前端开发进度**:85%(API设计完整)
- **后端开发进度**:65%(核心功能缺失)
- **系统完整度**:70%(关键模块未实现)
### 9.2 风险提示
1. **售后功能完全缺失** - 无法处理客户退货退款
2. **商品匹配功能缺失** - 无法关联平台与ERP商品
3. **财务数据不完整** - 影响财务报表准确性
4. **库存联动不明确** - 可能导致库存数据错误
### 9.3 建议行动
1. **立即启动售后模块开发**
2. **补全商品匹配功能**
3. **加强模块间数据联动**
4. **完善报表和统计功能**
---
**分析完成时间:** 2026-04-05 00:45
**分析工具:** Astron模型深度推理分析
**分析人员:** 子代理(深度分析任务)

# 幂等性检查与修复报告
**任务编号:** 2.4
**执行时间:** 2026-04-05
**涉及服务:** order-service, inventory-service, logistics-service
---
## 一、检查结果
### 1.1 订单创建 `POST /api/orders/pull`
- **检查结果:** ❌ 无幂等性保护
- **问题描述:** 重复调用 `pullOrders` / `createOrders` 会重复创建订单
- **现有字段:** `platformOrderSn`(平台订单号)存在,但无唯一约束和去重逻辑
### 1.2 库存扣减 `POST /api/stock/deduct`
- **检查结果:** ❌ 无幂等性保护
- **问题描述:** 网络重试导致重复扣减,可能出现库存为负数
- **现有保护:** 乐观锁 `@Version` 仅防止并发覆盖,不防止重复请求
### 1.3 发货回传 `POST /api/logistics/callback/{carrier}`
- **检查结果:** ⚠️ 部分幂等(轨迹去重,无运单状态幂等)
- **问题描述:** 同一运单状态更新回调重复处理,可能导致状态回退
- **现有保护:** `logisticsTraceMapper.countExist()` 对轨迹有去重,但运单状态更新无幂等
### 1.4 消息队列消费者
- **检查结果:** 无 MQ 消费者
- **说明:** 系统使用 Spring Event(`StockChangedEventListener`)实现异步,非传统 MQ
---
## 二、修复方案
### 2.1 订单服务order-service
#### 新增文件
| 文件 | 说明 |
|------|------|
| `service/impl/OrderIdempotencyService.java` | Redis 幂等性工具类 |
#### 修改文件
| 文件 | 修改内容 |
|------|---------|
| `service/impl/OrderServiceImpl.java` | 注入 `OrderIdempotencyService`,修改 `createOrders()``shipOrder()` 添加幂等检查 |
#### 幂等键设计
```
订单创建:order:idempotent:{platform}:{shopId}:{platformOrderSn}
订单发货:order:idempotent:ship:{orderId}
```
#### 修复逻辑
1. **Redis 检查**:请求到达时先查 Redis 是否有处理记录
2. **DB 兜底**:数据库唯一约束 `uk_orders_platform_order_sn(platform_order_sn, platform, shop_id)` 防止并发插入
3. **幂等标记**:处理成功后写入 Redis(TTL=24h)
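幂等检查的核心语义是"键不存在才写入成功"(即 Redis 的 SETNX)。下面用 `ConcurrentHashMap` 代替 Redis 给出一个可独立运行的最小示意;类名、方法名均为说明用的假设,真实实现应在 `OrderIdempotencyService` 中通过 `StringRedisTemplate.opsForValue().setIfAbsent(key, value, ttl)` 完成,并带上 24h 过期时间:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// 最小示意:用 ConcurrentHashMap.putIfAbsent 模拟 Redis SETNX 的幂等检查
// (本地 Map 无 TTL、不跨进程,仅用于演示语义)
public class IdempotencyDemo {
    private final ConcurrentMap<String, Boolean> store = new ConcurrentHashMap<>();

    /** 首次调用返回 true(允许处理),重复调用返回 false(幂等跳过) */
    public boolean tryAcquire(String key) {
        // putIfAbsent 返回 null 表示此前不存在,即本次为首次请求
        return store.putIfAbsent(key, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        IdempotencyDemo demo = new IdempotencyDemo();
        String key = "order:idempotent:JD:1001:SN20260405";
        System.out.println(demo.tryAcquire(key)); // true:首次处理
        System.out.println(demo.tryAcquire(key)); // false:重复请求,直接返回缓存结果
    }
}
```

业务侧拿到 `false` 时不应报错,而是返回上次的处理结果,保证接口对调用方幂等;Redis 宕机时由数据库唯一索引兜底。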
---
### 2.2 库存服务inventory-service
#### 新增文件
| 文件 | 说明 |
|------|------|
| `service/impl/StockIdempotencyService.java` | Redis 幂等性工具类 |
#### 修改文件
| 文件 | 修改内容 |
|------|---------|
| `controller/StockFeignController.java` | 注入 `StockIdempotencyService`,修改 `deductStock()` / `lockStock()` / `unlockStock()` 添加幂等检查 |
#### 幂等键设计
```
扣减库存:stock:idempotent:{skuCode}:{warehouseId}:{relatedNo}
锁定库存:stock:idempotent:lock:{skuCode}:{warehouseId}:{relatedNo}
解锁库存:stock:idempotent:unlock:{skuCode}:{warehouseId}:{relatedNo}
```
#### 修复逻辑
1. **Redis 检查**:到达即查重
2. **幂等响应**:重复请求返回缓存结果而非错误,保证接口幂等
3. **DB 唯一索引**`uk_stocks_sku_warehouse(sku_code, warehouse_id)` 防止库存记录重复
---
### 2.3 物流服务logistics-service
#### 新增文件
| 文件 | 说明 |
|------|------|
| `service/impl/LogisticsIdempotencyService.java` | Redis 幂等性工具类 |
#### 修改文件
| 文件 | 修改内容 |
|------|---------|
| `service/TraceSyncService.java` | 注入 `LogisticsIdempotencyService`,修改 `processCallback()` 添加幂等检查 |
#### 幂等键设计
```
回调处理:logistics:idempotent:{carrierCode}:{waybillNo}:{status}:{latestTraceTime}
```
#### 修复逻辑
1. **Redis 检查**同一运单同一状态回调在24h内不重复处理
2. **轨迹去重**:数据库已有 `uk_logistics_trace_waybill_time_location` 保障轨迹不重复
---
## 三、数据库变更SQL Migration
**文件:** `docs/database/V2.4__idempotency_indexes.sql`
```sql
-- orders 表:平台订单号+平台+店铺 唯一索引
ALTER TABLE orders
ADD CONSTRAINT uk_orders_platform_order_sn
UNIQUE (platform_order_sn, platform, shop_id);
-- stocks 表SKU+仓库 唯一索引
ALTER TABLE stocks
ADD CONSTRAINT uk_stocks_sku_warehouse
UNIQUE (sku_code, warehouse_id);
-- logistics_trace 表:运单号+轨迹时间+地点 唯一索引
ALTER TABLE logistics_trace
ADD CONSTRAINT uk_logistics_trace_waybill_time_location
UNIQUE (waybill_no, trace_time, location);
```
---
## 四、幂等性保证层次
```
┌─────────────────────────────────────────────────────┐
│ 接入层 │
│ 幂等Token / 请求ID可选未来扩展
├─────────────────────────────────────────────────────┤
│ Redis 层 │
│ SETNX 原子操作防并发重复请求TTL=24h
├─────────────────────────────────────────────────────┤
│ 数据库层 │
│ 唯一索引兜底UK(platform_order_sn,platform,shop_id) │
│ UK(sku_code, warehouse_id) │
│ UK(waybill_no, trace_time, location) │
├─────────────────────────────────────────────────────┤
│ 业务层 │
│ 状态机 + 乐观锁防止状态异常 │
└─────────────────────────────────────────────────────┘
```
---
## 五、影响范围
| 接口 | 服务 | 幂等键 | 影响 |
|------|------|--------|------|
| `POST /api/orders/pull` | order-service | `platform:shopId:platformOrderSn` | ✅ 已修复 |
| `POST /api/orders/{id}/ship` | order-service | `ship:orderId` | ✅ 已修复 |
| `POST /api/stock/deduct` | inventory-service | `skuCode:warehouseId:relatedNo` | ✅ 已修复 |
| `POST /api/stock/lock` | inventory-service | `lock:skuCode:warehouseId:relatedNo` | ✅ 已修复 |
| `POST /api/stock/unlock` | inventory-service | `unlock:skuCode:warehouseId:relatedNo` | ✅ 已修复 |
| `POST /api/logistics/callback/{carrier}` | logistics-service | `carrier:waybillNo:status:traceTime` | ✅ 已修复 |
---
## 六、注意事项
1. **Redis 连接**:所有服务 pom.xml 均已包含 `spring-boot-starter-data-redis`,无需额外依赖
2. **幂等TTL**:统一设为 24 小时,可根据业务调整
3. **数据库索引**:执行 SQL migration 时注意在线表操作影响,建议在低峰期执行
4. **消息队列**:当前系统无 MQ 消费者(使用 Spring Event无需额外处理
5. **测试建议**:使用 Postman/Curl 重复调用上述接口,验证幂等返回而非重复创建
---
## 七、修改文件清单
```
services/order-service/src/main/java/com/erp/order/service/impl/
+ OrderIdempotencyService.java [NEW]
services/order-service/src/main/java/com/erp/order/service/impl/OrderServiceImpl.java
[MOD] 添加 OrderIdempotencyService 依赖
[MOD] createOrders() 添加幂等检查
[MOD] shipOrder() 添加幂等检查
services/inventory-service/src/main/java/com/erp/inventory/service/impl/
+ StockIdempotencyService.java [NEW]
services/inventory-service/src/main/java/com/erp/inventory/controller/StockFeignController.java
[MOD] 添加 StockIdempotencyService 依赖
[MOD] deductStock() 添加幂等检查
[MOD] lockStock() 添加幂等检查
[MOD] unlockStock() 添加幂等检查
services/logistics-service/src/main/java/com/erp/logistics/service/impl/
+ LogisticsIdempotencyService.java [NEW]
services/logistics-service/src/main/java/com/erp/logistics/service/TraceSyncService.java
[MOD] 添加 LogisticsIdempotencyService 依赖
[MOD] processCallback() 添加幂等检查
docs/database/V2.4__idempotency_indexes.sql [NEW]
docs/幂等性修复报告.md [NEW]
```
# ERP微服务架构与数据库设计报告
## 文档信息
- **项目名称**ERP Java微服务系统
- **版本**V1.0
- **日期**2026-04-04
- **微服务数量**23个
- **数据库数量**13个
---
## 一、总体架构
### 1.1 微服务列表
| 序号 | 服务名称 | 服务说明 | 端口 | 数据库 |
|------|----------|----------|------|--------|
| 1 | user-service | 用户服务 | 8081 | user_db |
| 2 | order-service | 订单服务 | 8082 | order_db |
| 3 | product-service | 商品服务 | 8083 | product_db |
| 4 | inventory-service | 库存服务 | 8084 | inventory_db |
| 5 | warehouse-service | 仓库服务 | 8085 | warehouse_db |
| 6 | finance-service | 财务服务 | 8086 | finance_db |
| 7 | customer-service | 客户管理 | 8087 | customer_db |
| 8 | supplier-service | 供应商管理 | 8088 | supplier_db |
| 9 | purchase-service | 采购管理 | 8089 | purchase_db |
| 10 | invoice-service | 发票服务 | 8090 | invoice_db |
| 11 | notification-service | 通知服务 | 8091 | notification_db |
| 12 | ai-service | AI服务 | 8092 | ai_db |
| 13 | file-service | 文件服务 | 8093 | file_db |
| 14 | permission-service | 权限服务 | 8094 | permission_db |
| 15 | tenant-service | 租户服务 | 8095 | tenant_db |
| 16 | approval-flow-service | 审批流服务 | 8096 | approval_db |
| 17 | reconciliation-service | 对账服务 | 8097 | reconciliation_db |
| 18 | report-service | 报表服务 | 8098 | report_db |
| 19 | dashboard-service | 仪表盘服务 | 8099 | dashboard_db |
| 20 | system-tool-service | 系统工具服务 | 8100 | system_tool_db |
| 21 | admin-service | 超级管理员服务 | 8101 | admin_db |
| 22 | category-service | 分类服务 | 8102 | category_db |
| 23 | api-gateway | API网关 | 8000 | 无(路由) |
---
## 二、数据库详细设计
### 2.1 user_db用户服务
**服务**user-service
**主要表**
- users用户表
- roles角色表
- permissions权限表
- user_roles用户角色关联
- login_logs登录日志
**核心字段**id, username, password, email, phone, status, created_at
---
### 2.2 order_db订单服务
**服务**order-service
**主要表**
- orders订单主表
- order_items订单明细
- order_logs订单操作日志
- settlement_reports结算报表
- order_settlements订单结算
**核心字段**order_no, customer_id, total_amount, status, created_at
---
### 2.3 product_db商品服务
**服务**product-service
**主要表**
- goods商品表
- brands品牌表
- categories分类表
- goods_skuSKU规格
**核心字段**goods_no, name, price, stock, status
---
### 2.4 warehouse_db仓库服务
**服务**warehouse-service
**主要表**
- stock库存表
- stock_logs库存流水
- warehouses仓库表
- locations库位表
- inbound_order入库单
- outbound_order出库单
- transfer_order调拨单
- stock_alert_config预警配置
- stock_alert_notification预警通知
**核心字段**sku_id, warehouse_id, quantity, locked, reserved
---
### 2.5 finance_db财务服务
**服务**finance-service
**主要表**
- finance_transactions财务流水
- account_balances账户余额
- payments付款记录
- receipts收款记录
**核心字段**transaction_no, type, amount, balance_before, balance_after
---
### 2.6 customer_db客户管理
**服务**customer-service
**主要表**
- customers客户表
- customer_contacts联系人
- customer_addresses地址
- customer_follow_ups跟进记录
- customer_relationships客户关系
- customer_statistics统计
**核心字段**customer_no, name, level, phone, address
---
### 2.7 supplier_db供应商管理
**服务**supplier-service
**主要表**
- suppliers供应商表
- supplier_contacts供应商联系人
- supplier_bank_accounts银行账户
- supplier_ratings评级记录
- inquiry_sheets询价单
**核心字段**supplier_no, name, contact, phone, status
---
### 2.8 purchase_db采购管理
**服务**purchase-service
**主要表**
- purchase_orders采购订单
- purchase_inbound采购入库
- purchase_return采购退货
- suppliers供应商
**核心字段**purchase_no, supplier_id, total_amount, status
---
### 2.9 invoice_db发票服务
**服务**invoice-service
**主要表**
- invoices发票表
- invoice_items发票明细
**核心字段**invoice_no, customer_id, amount, type, status
---
### 2.10 notification_db通知服务
**服务**notification-service
**主要表**
- notifications通知消息
- notification_templates模板
**核心字段**title, content, type, recipient, status
---
### 2.11 ai_dbAI服务
**服务**ai-service
**主要表**
- ai_conversations对话记录
- ai_messages消息记录
- ai_usage_logs用量日志
**核心字段**conversation_id, model, prompt, response
---
### 2.12 file_db文件服务
**服务**file-service
**主要表**
- files文件表
- file_groups文件分组
**核心字段**file_name, file_path, file_size, mime_type
---
### 2.13 其他数据库
| 数据库 | 服务 | 主要表 |
|--------|------|--------|
| permission_db | permission-service | roles, permissions |
| tenant_db | tenant-service | tenants, packages |
| approval_db | approval-flow-service | audit_rules, audit_logs |
| reconciliation_db | reconciliation-service | reconciliation_bills |
| report_db | report-service | data_exports |
| dashboard_db | dashboard-service | dashboard_configs |
| system_tool_db | system-tool-service | system_configs, operation_logs |
---
## 三、技术栈
### 3.1 后端技术
- **框架**Spring Boot 3.2 / Spring Cloud Alibaba
- **ORM**MyBatis-Plus 3.5
- **注册中心**Nacos
- **配置中心**Nacos
- **消息队列**RocketMQ
- **分布式事务**SeataAT模式
- **API网关**Spring Cloud Gateway
- **数据库**MySQL 8.0
- **缓存**Redis
### 3.2 基础设施
- **容器化**Docker + Kubernetes
- **监控**SkyWalking
- **日志**ELK Stack
- **对象存储**MinIO
---
## 四、服务间调用关系
```
客户端
API Gateway认证、限流、路由
┌─────────────────────────────────────────┐
│ 业务微服务层 │
├─────────────────────────────────────────┤
│ user-service ←→ permission-service │
│ order-service ←→ inventory-service │
│ order-service ←→ finance-service │
│ order-service ←→ warehouse-service │
│ purchase-service ←→ supplier-service │
│ purchase-service ←→ warehouse-service │
│ purchase-service ←→ finance-service │
│ customer-service ←→ order-service │
└─────────────────────────────────────────┘
分布式事务Seata
消息队列RocketMQ
```
---
## 五、部署架构
### 5.1 Kubernetes部署
- 每个微服务独立Deployment
- HPA自动扩缩容
- 探针健康检查
- 资源限制CPU/内存)
### 5.2 数据库部署
- 每个数据库独立Pod
- 主从复制
- 定期备份
---
## 六、后续规划
1. **服务合并**:考虑将小服务合并,减少服务数量
2. **读写分离**:对订单、商品等大表做读写分离
3. **分库分表**:对超大数据量表做分片
4. **缓存优化**增加Redis缓存减少数据库压力
5. **API文档**完善Swagger/OpenAPI文档
---
**文档生成时间**2026-04-04
**生成人**丫头AI助手
# 总控与租户功能检查与修复报告
> 检查时间2026-04-05
> 检查范围:`tenant-service`8083+ `admin-service`(不存在,已合并到 tenant-service
---
## 一、检查结果总览
| 检查项 | 状态 | 说明 |
|--------|------|------|
| 1. 租户注册接口创建记录并发送通知 | ❌ 未完成 | 创建租户后无任何通知发送 |
| 2. 总控审核接口更新状态并初始化套餐 | ❌ 不存在 | 只有管理接口,无审核工作流 |
| 3. 套餐变更接口推送到Nacos | ❌ 未完成 | 套餐修改仅写数据库未推送Nacos |
| 4. 功能开关接口动态刷新 | ❌ 未完成 | 无功能开关服务,仅有系统配置 |
| 5. API调用量监控记录 | ❌ 未完成 | api-gateway为空目录无计数器 |
---
## 二、详细检查
### 2.1 租户注册接口 - 通知缺失
**文件:** `tenant-service/src/main/java/com/erp/tenant/service/impl/TenantServiceImpl.java`
**问题:** `createTenant()` 方法仅执行数据库写入,无任何通知逻辑。
```java
// 当前实现(只有数据库写入)
tenantRepository.insert(tenant);
log.info("创建租户成功: id={}, code={}", tenant.getId(), tenant.getCode());
return TenantDetailResponse.fromEntity(tenant, null);
// 无通知发送
```
**修复:** 注入 `NotificationFeignClient`,创建后发送:
- 系统通知Redis队列
- 邮件通知(管理员邮箱)
- 钉钉通知(管理员群)
---
### 2.2 总控审核接口 - 不存在
**现状:** `TenantAdminController` 只有以下接口:
| 接口 | 功能 |
|------|------|
| `GET /api/admin/tenants` | 租户列表 |
| `POST /api/admin/tenants` | 创建租户 |
| `PUT /api/admin/tenants/{id}` | 更新租户 |
| `POST /api/admin/tenants/{id}/toggle-status` | 切换状态 |
| `POST /api/admin/tenants/{id}/suspend` | 暂停租户 |
| `POST /api/admin/tenants/{id}/activate` | 恢复租户 |
**缺失:** 无审核工作流接口approve/reject
**Tenant实体状态常量**
```java
STATUS_ACTIVE = 1 // 正常
STATUS_INACTIVE = 0 // 禁用
STATUS_SUSPENDED = -1 // 暂停
// 缺失 STATUS_PENDING_REVIEW = 2 ← 已补充
```
---
### 2.3 套餐变更接口 - 未推送Nacos
**文件:** `tenant-service/src/main/java/com/erp/tenant/service/impl/TenantServiceImpl.java`
**问题:** `updatePackage()``togglePackageStatus()` 只修改数据库无Nacos推送。
```java
// 当前实现
packageRepository.updateById(pkg);
log.info("更新套餐成功: id={}", id);
return PackageResponse.fromEntity(pkg);
// 无Nacos推送
```
---
### 2.4 功能开关接口 - 不存在
**现状:** 只有 `SystemConfigService`普通配置Redis缓存无功能开关动态刷新机制。
**缺失:**
- FeatureFlag 实体
- FeatureFlagService动态刷新
- FeatureFlagController
---
### 2.5 API调用量监控 - 网关为空
**现状:** `services/api-gateway/` 目录仅有 `README.md`无任何Java代码。
**缺失:** 无请求计数器、无统计接口。
---
## 三、修复实施
### 3.1 修复1租户注册发送通知
**新增文件:**
- `notification-service/src/main/java/com/erp/notification/NotificationServiceApplication.java`
- `notification-service/src/main/java/com/erp/notification/service/NotificationService.java`
- `notification-service/src/main/java/com/erp/notification/service/impl/NotificationServiceImpl.java`
- `notification-service/src/main/java/com/erp/notification/controller/NotificationController.java`
- `notification-service/src/main/java/com/erp/notification/dto/NotificationRequest.java`
- `notification-service/src/main/java/com/erp/notification/client/NotificationClient.java`
- `notification-service/src/main/resources/application.yml`
- `tenant-service/src/main/java/com/erp/tenant/feign/NotificationFeignClient.java`
**修改文件:**
- `TenantServiceApplication.java` - 添加 `@EnableFeignClients`
- `TenantServiceImpl.java` - `createTenant()` 方法末尾添加通知发送
**通知类型:**
1. **系统通知** → Redis队列 `notifications:tenant:{tenantId}`
2. **邮件通知** → 管理员邮箱(可配置)
3. **钉钉通知** → 管理员群(可配置)
---
### 3.2 修复2新增总控审核接口
**新增文件:**
- `tenant-service/src/main/java/com/erp/tenant/service/nacos/NacosConfigService.java`
**修改文件:**
- `TenantAdminController.java` - 新增审核接口
- `TenantService.java` - 新增方法声明
- `TenantServiceImpl.java` - 实现审核逻辑
- `Tenant.java` - 新增 `STATUS_PENDING_REVIEW = 2`
**新增审核接口:**
| 方法 | 路径 | 功能 |
|------|------|------|
| GET | `/api/admin/tenants/pending-review` | 获取待审核列表 |
| POST | `/api/admin/tenants/{id}/approve` | 审核通过 |
| POST | `/api/admin/tenants/{id}/reject` | 审核拒绝 |
**审核通过操作:**
1. 将租户状态更新为 `STATUS_ACTIVE`
2. 初始化套餐配置到 Redis`tenant:package:{id}:info/features/limits`
3. 推送套餐配置到 Nacos
4. 发送审核通过通知(邮件 + 系统通知)
**审核拒绝操作:**
1. 将租户状态更新为 `STATUS_INACTIVE`
2. 发送审核拒绝通知(含拒绝原因)
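审核的状态流转可以用一个极简状态机草图表达(类名为假设,状态常量取自上文 `Tenant` 实体,仅允许待审核状态流转):

```java
/**
 * 租户审核状态机草图:仅允许 待审核(2) 流转为 正常(1) 或 禁用(0)。
 * 常量取自 Tenant 实体。
 */
class TenantReviewStateMachine {
    static final int STATUS_INACTIVE = 0;        // 禁用
    static final int STATUS_ACTIVE = 1;          // 正常
    static final int STATUS_SUSPENDED = -1;      // 暂停
    static final int STATUS_PENDING_REVIEW = 2;  // 待审核

    /** 审核通过:只有待审核租户可以流转为正常 */
    static int approve(int current) {
        if (current != STATUS_PENDING_REVIEW) {
            throw new IllegalStateException("只有待审核租户可以审核通过, 当前状态=" + current);
        }
        return STATUS_ACTIVE;
    }

    /** 审核拒绝:待审核流转为禁用 */
    static int reject(int current) {
        if (current != STATUS_PENDING_REVIEW) {
            throw new IllegalStateException("只有待审核租户可以审核拒绝, 当前状态=" + current);
        }
        return STATUS_INACTIVE;
    }
}
```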
---
### 3.3 修复3套餐变更推送到Nacos
**新增文件:**
- `tenant-service/src/main/java/com/erp/tenant/service/nacos/NacosConfigService.java`
**修改文件:**
- `TenantServiceImpl.java` - `updatePackage()``togglePackageStatus()` 调用 `pushPackageChangeToNacos()`
- `TenantAdminController.java` - 新增 `POST /api/admin/packages/{id}/push-nacos` 手动推送接口
**Nacos推送内容**
```yaml
# tenant-service推送的套餐配置data-id格式: tenant-{tenantId}-package-{packageId}
package:
id: 1
code: "basic"
name: "基础版"
type: month
status: 1
price: 299.00
period_days: 30
trial_days: 7
features: '{"order":true,"stock":true,"report":false}'
limits: '{"users":10,"storage":5}'
updated_at: 2026-04-05T10:00:00
```
**Nacos推送方法**
- `publishTenantPackageConfig(Long tenantId, Long packageId, String configJson)`
- `publishFeatureFlag(Long tenantId, String featureKey, boolean enabled)`
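dataId 与分组的组装逻辑可以抽成纯函数,便于单测(草图:分组名取自后文提到的 `TENANT_CONFIG`/`TENANT_FEATURE`,功能开关的 dataId 格式为假设;真实推送由 Nacos SDK 的 `ConfigService.publishConfig(dataId, group, content)` 完成):

```java
/**
 * Nacos dataId/分组组装草图(纯函数,类名为假设)。
 * 套餐配置 dataId 格式见上文tenant-{tenantId}-package-{packageId}
 */
class NacosDataIds {
    static final String GROUP_TENANT_CONFIG = "TENANT_CONFIG";
    static final String GROUP_TENANT_FEATURE = "TENANT_FEATURE";

    /** 套餐配置 dataId */
    static String packageDataId(long tenantId, long packageId) {
        return "tenant-" + tenantId + "-package-" + packageId;
    }

    /** 功能开关 dataId格式为假设 */
    static String featureDataId(long tenantId, String featureKey) {
        return "tenant-" + tenantId + "-feature-" + featureKey;
    }
}
```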
---
### 3.4 修复4功能开关动态刷新
**新增文件:**
- `tenant-service/src/main/java/com/erp/tenant/entity/feature/FeatureFlag.java`
- `tenant-service/src/main/java/com/erp/tenant/repository/feature/FeatureFlagRepository.java`
- `tenant-service/src/main/java/com/erp/tenant/service/feature/FeatureFlagService.java`
- `tenant-service/src/main/java/com/erp/tenant/controller/feature/FeatureFlagController.java`
**功能开关接口:**
| 方法 | 路径 | 功能 |
|------|------|------|
| GET | `/api/admin/features` | 分页查询功能开关 |
| GET | `/api/admin/features/tenant/{tenantId}` | 获取租户功能开关Map |
| GET | `/api/admin/features/tenant/{tenantId}/check/{featureKey}` | 检查功能是否启用 |
| POST | `/api/admin/features` | 创建功能开关 |
| PUT | `/api/admin/features/{id}` | 更新功能开关 |
| POST | `/api/admin/features/{id}/toggle` | 切换状态(动态刷新) |
| DELETE | `/api/admin/features/{id}` | 删除功能开关 |
| POST | `/api/admin/features/refresh-cache` | 刷新所有缓存 |
**动态刷新机制:**
1. `toggleFeatureFlag()` 修改状态后
2. 调用 `refreshFeatureCache()` 清除Redis缓存
3. 调用 `pushFeatureToNacos()` 推送到Nacos
4. 各微服务监听Nacos变化动态更新本地配置
---
### 3.5 修复5API网关计数器
**新增文件:**
- `api-gateway/pom.xml`
- `api-gateway/src/main/java/com/erp/gateway/ApiGatewayApplication.java`
- `api-gateway/src/main/java/com/erp/gateway/filter/ApiCounterFilter.java`
- `api-gateway/src/main/java/com/erp/gateway/controller/GatewayStatsController.java`
- `api-gateway/src/main/resources/application.yml`
**计数器功能(`ApiCounterFilter`**
1. **全局计数器**:按分钟/小时/天粒度统计总调用量
2. **租户维度**:每个租户的调用量和错误数
3. **端点维度**每个API路径的调用量和错误数
4. **请求日志**保留最近1000条完整请求记录含路径、方法、租户、状态码、响应时间
5. **P95/P99计算**:记录每个端点的响应时间分布
**Redis存储结构**
```
gateway:api:minute:2026-04-05-10-30 → ZSET (total: 1523)
gateway:api:hour:2026-04-05-10 → ZSET (total: 15234)
gateway:api:day:2026-04-05 → ZSET (total: 152340)
gateway:api:tenant:123:2026-04-05 → ZSET (count: 500, errors: 3)
gateway:api:endpoint:/api/orders:2026-04-05 → ZSET (count: 1000, errors: 5)
gateway:api:requests:2026-04-05 → LIST (最近1000条JSON日志)
gateway:api:duration:/api/orders:2026-04-05 → LIST (时间戳:响应时间)
```
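上面的 Redis key 可以抽成纯函数组装,便于单测与复用(草图,类名为假设;真实过滤器中这些 key 配合 Redis 的 INCR/ZINCRBY 等命令使用):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

/** ApiCounterFilter 的 Redis key 组装草图,与上文存储结构一一对应。 */
class GatewayCounterKeys {

    private static final DateTimeFormatter DAY    = DateTimeFormatter.ofPattern("yyyy-MM-dd");
    private static final DateTimeFormatter HOUR   = DateTimeFormatter.ofPattern("yyyy-MM-dd-HH");
    private static final DateTimeFormatter MINUTE = DateTimeFormatter.ofPattern("yyyy-MM-dd-HH-mm");

    static String minuteKey(LocalDateTime t) { return "gateway:api:minute:" + MINUTE.format(t); }

    static String hourKey(LocalDateTime t)   { return "gateway:api:hour:" + HOUR.format(t); }

    static String dayKey(LocalDateTime t)    { return "gateway:api:day:" + DAY.format(t); }

    /** 租户维度:调用量与错误数按天聚合 */
    static String tenantKey(long tenantId, LocalDateTime t) {
        return "gateway:api:tenant:" + tenantId + ":" + DAY.format(t);
    }

    /** 端点维度:按 API 路径与天聚合 */
    static String endpointKey(String path, LocalDateTime t) {
        return "gateway:api:endpoint:" + path + ":" + DAY.format(t);
    }
}
```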
**统计接口GatewayStatsController**
| 方法 | 路径 | 功能 |
|------|------|------|
| GET | `/api/gateway/stats/daily` | 每日调用统计 |
| GET | `/api/gateway/stats/tenants` | 租户调用排行 |
| GET | `/api/gateway/stats/endpoints` | 端点调用排行 |
| GET | `/api/gateway/stats/realtime` | 实时统计(当前小时) |
| GET | `/api/gateway/stats/logs` | 最近请求日志 |
---
## 四、修改文件清单
### 新增文件18个
| 文件 | 说明 |
|------|------|
| `notification-service/.../NotificationServiceApplication.java` | 通知服务启动类 |
| `notification-service/.../NotificationService.java` | 通知服务接口 |
| `notification-service/.../NotificationServiceImpl.java` | 通知服务实现(钉钉/邮件/短信/系统通知) |
| `notification-service/.../NotificationController.java` | 通知REST接口 |
| `notification-service/.../NotificationRequest.java` | 通知请求DTO |
| `notification-service/.../NotificationClient.java` | Feign客户端 |
| `notification-service/.../application.yml` | 通知服务配置 |
| `tenant-service/.../NotificationFeignClient.java` | 租户服务调用通知服务的Feign客户端 |
| `tenant-service/.../NacosConfigService.java` | Nacos配置推送服务 |
| `tenant-service/.../feature/FeatureFlag.java` | 功能开关实体 |
| `tenant-service/.../feature/FeatureFlagRepository.java` | 功能开关数据访问 |
| `tenant-service/.../FeatureFlagService.java` | 功能开关服务(含动态刷新) |
| `tenant-service/.../FeatureFlagController.java` | 功能开关REST接口 |
| `api-gateway/pom.xml` | API网关Maven配置 |
| `api-gateway/.../ApiGatewayApplication.java` | API网关启动类 |
| `api-gateway/.../ApiCounterFilter.java` | API调用量计数器过滤器 |
| `api-gateway/.../GatewayStatsController.java` | 网关统计接口 |
| `api-gateway/.../application.yml` | API网关配置 |
### 修改文件6个
| 文件 | 修改内容 |
|------|----------|
| `TenantServiceApplication.java` | 添加 `@EnableFeignClients` |
| `TenantServiceImpl.java` | 添加通知发送、Nacos推送、审核接口实现 |
| `TenantAdminController.java` | 新增审核接口、手动推送Nacos接口 |
| `TenantService.java` | 新增 approveTenant/rejectTenant/pushPackageToNacos 方法声明 |
| `Tenant.java` | 新增 `STATUS_PENDING_REVIEW = 2` 状态常量 |
| `notification-service/src/main/resources/bootstrap.yml` | 重命名为 application.ymlNacos配置整合 |
---
## 五、后续建议
1. **notification-service** 需要实现真实的邮件发送(目前为模拟日志)
2. **Nacos配置** 需要在 Nacos 控制台确认 `TENANT_CONFIG``TENANT_FEATURE` 分组存在
3. **api-gateway** 路由配置 `gateway-routes.yaml` 需要在 Nacos 中创建
4. **feature_flags** 数据表需要创建SQL建表语句待补充
5. **审核工作流**建议增加:拒绝原因必填、审核日志记录、审核超时提醒

# ERP系统缺失功能清单按优先级排序
## 🔴 P0优先级 - 核心功能缺失(立即修复)
### 1. 售后管理模块
**状态:** 完全缺失
**前端API** 8个函数afterSale.ts
**后端实现:** 无对应Controller
**影响:** 无法处理退货、退款、换货等售后业务
**建议:** 创建 `after-sale-service`,实现售后单全流程
### 2. 订单商品匹配功能
**状态:** 完全缺失
**前端API** `matchOrder()` 存在
**后端实现:** 无对应端点
**影响:** 无法关联平台SKU与ERP商品
**建议:** 在 `order-service` 添加商品匹配接口
### 3. 待匹配订单查询
**状态:** 完全缺失
**前端API** `getPendingMatchOrders()` 存在
**后端实现:** 无对应端点
**影响:** 无法查看需要商品匹配的订单列表
**建议:** 添加 `GET /api/orders/pending-match` 接口
### 4. 待审核订单专用接口
**状态:** 完全缺失
**前端API** `getPendingAuditOrders()` 存在
**后端实现:** 无专用端点
**影响:** 审核效率低,需通过通用接口过滤
**建议:** 添加 `GET /api/orders/pending-audit` 接口
## 🟡 P1优先级 - 重要功能缺失(近期优化)
### 1. 财务与订单完整联动
**状态:** 部分实现
**发现:** 财务服务接口存在,但订单模块集成不完整
**影响:** 财务数据不完整,影响报表准确性
**建议:** 实现订单完成自动生成应收款
### 2. 库存预警机制
**状态:** 待确认
**影响:** 无法及时预警低库存情况
**建议:** 实现库存阈值预警和自动提醒
### 3. 库存锁定机制
**状态:** 待确认
**影响:** 可能发生超卖情况
**建议:** 实现订单审核后锁定库存
### 4. 报表系统完善
**状态:** 部分实现
**前端API** 5个报表函数
**后端实现:** 部分报表功能
**建议:** 完善销售、采购、库存、财务完整报表
## 🟢 P2优先级 - 优化功能缺失(长期规划)
### 1. 工作流引擎
**状态:** 缺失
**影响:** 审批流程固化,无法灵活配置
**建议:** 实现可视化工作流配置
### 2. 缓存机制优化
**状态:** 待优化
**影响:** 大数据量查询性能
**建议:** 实现Redis缓存优化热点数据
### 3. 异步处理优化
**状态:** 基础实现
**发现:** 使用RocketMQ进行消息队列
**建议:** 完善失败重试和补偿机制
## ✅ 已验证功能
### 1. 库存扣减机制
**状态:** 已实现
**位置:** `OrderServiceImpl.deductStockForOrder()`
**机制:** 通过`StockClient`服务间调用
### 2. 采购财务联动
**状态:** 部分实现
**位置:** `PurchaseOrderServiceImpl`调用`FinanceFeignClient`
**状态:** 采购模块有财务联动基础
## 📊 开发进度统计
| 模块 | 前端进度 | 后端进度 | 状态 |
|------|---------|---------|------|
| 订单模块 | 100% (23/23) | 87% (18/21) | 核心功能完整 |
| 库存模块 | 100% (15/15) | 100% (5/5) | 功能完整 |
| 售后模块 | 100% (8/8) | 0% (0/8) | **严重缺失** |
| 报表模块 | 100% (5/5) | 60% (3/5) | 部分实现 |
| 财务模块 | 20% (1/5) | 100% (5/5) | 后端完整 |
**总体评估:**
- 前端API设计85% 完整
- 后端功能实现65% 完整
- 系统完整度70%
---
**生成时间:** 2026-04-05
**数据来源:** 前后端代码深度分析
# 自建物流功能检查与修复报告
**检查时间:** 2026-04-05
**服务模块:** logistics-service
**关联服务:** scheduled-task-service
---
## 一、检查结果汇总
| # | 检查项 | 状态 | 说明 |
|---|--------|------|------|
| 1 | 快递公司适配器轮询拉取 | ⚠️ 框架就绪API未对接 | 4个适配器已创建`queryTraces()`均返回空 |
| 2 | 回调接收接口 | ✅ 已实现 | `POST /api/logistics/callback/{carrier}` |
| 3 | logistics_trace表分区与索引 | ❌ 无分区,有索引 | 表无 RANGE/LIST 分区 |
| 4 | 异常检测定时任务 | ❌ 不存在 | scheduled-task-service中无物流异常检测任务 |
| 5 | 物流时间轴接口 | ⚠️ 接口存在,数据为空 | 依赖轨迹拉取,拉取不到数据则为空 |
---
## 二、详细检查结果
### 2.1 快递公司适配器 — 框架就绪API待对接
**已实现的适配器4个**
- `SfCarrierAdapter` — 顺丰速运
- `YtoCarrierAdapter` — 圆通速递
- `ZtoCarrierAdapter` — 中通快递
- `YundaCarrierAdapter` — 韵达快递
**公共能力(`AbstractCarrierAdapter`**
- `verifyMd5Sign()` — MD5签名验证
- `parseTime()` — 时间解析(支持 yyyy-MM-dd HH:mm:ss 和 yyyy-MM-dd
- `toJson()` — JSON序列化
**关键问题:`queryTraces()` 均未实现,示例:**
```java
// SfCarrierAdapter.queryTraces()
@Override
public List<LogisticsTrace> queryTraces(String waybillNo) {
log.info("[顺丰] 查询轨迹, 运单号: {}", waybillNo);
List<LogisticsTrace> traces = new ArrayList<>();
try {
// TODO: 调用顺丰镖局API
log.warn("[顺丰] API对接待实现当前返回空轨迹");
} catch (Exception e) {
log.error("[顺丰] 查询轨迹异常: {}", waybillNo, e);
}
return traces; // ← 返回空列表
}
```
**轮询拉取调度逻辑(`TraceSyncService`**
- ⚠️ `@Scheduled(fixedDelayString = "${logistics.sync.interval-minutes:30}000")` 意图为每30分钟批量同步但占位符解析结果与 "000" 拼接后实际为 30000ms即30秒若要分钟粒度应改为将分钟数乘以 60000
- ✅ `syncPending()` — 查询`need_sync=1`且`sync_status IN (0,3)`的运单
- ✅ `syncSingle()` — 单条同步,支持重试(`@Retryable(maxAttempts=3)`
- ✅ 去重逻辑:根据`waybill_no + trace_time + location`判断是否重复
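去重口径可以用如下草图说明(类名为假设,内存 Set 代替 `countExist()` 的数据库查询;键的组成与 `waybill_no + trace_time + location` 唯一口径一致):

```java
import java.time.LocalDateTime;
import java.util.HashSet;
import java.util.Set;

/**
 * 轨迹去重草图:按 waybill_no + trace_time + location 组成唯一键,
 * 真实实现中对应 logisticsTraceMapper.countExist() 的数据库查重。
 */
class TraceDeduplicator {

    private final Set<String> seen = new HashSet<>();

    /** 去重键:三元组拼接,与数据库唯一索引同口径 */
    static String dedupKey(String waybillNo, LocalDateTime traceTime, String location) {
        return waybillNo + "|" + traceTime + "|" + location;
    }

    /** 首次出现返回 true应插入重复返回 false应跳过 */
    boolean shouldInsert(String waybillNo, LocalDateTime traceTime, String location) {
        return seen.add(dedupKey(waybillNo, traceTime, location));
    }
}
```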
---
### 2.2 回调接收接口 — ✅ 已完整实现
**接口路径:** `POST /api/logistics/callback/{carrier}`
```java
@PostMapping("/callback/{carrier}")
public ApiResponse<Void> callback(
@PathVariable String carrier,
@RequestBody LogisticsCallbackRequest request) {
request.setCarrier(carrier);
boolean success = traceSyncService.processCallback(carrier, request.getData());
// ...
}
```
**处理流程:**
1. 签名验证(`verifySign()`)— 失败不阻断,仍处理
2. 解析回调(`parseCallback()`)— 各适配器自定义字段映射
3. 保存/更新运单状态(`WaybillStatus`
4. 去重插入轨迹(`LogisticsTrace`
---
### 2.3 logistics_trace表 — ❌ 无分区
**现有DDL`logistics-service/src/main/resources/db/init.sql`**
```sql
CREATE TABLE IF NOT EXISTS logistics_trace (
id BIGINT AUTO_INCREMENT PRIMARY KEY,
waybill_no VARCHAR(50) NOT NULL,
carrier VARCHAR(20) NOT NULL,
status VARCHAR(30) DEFAULT '',
status_label VARCHAR(50) DEFAULT '',
location VARCHAR(200) DEFAULT '',
description VARCHAR(500) DEFAULT '',
trace_time DATETIME NOT NULL,
raw_status_code VARCHAR(50) DEFAULT '',
raw_data JSON DEFAULT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
deleted TINYINT(1) DEFAULT 0,
INDEX idx_waybill_no (waybill_no),
INDEX idx_carrier (carrier),
INDEX idx_trace_time (trace_time),
INDEX idx_waybill_trace (waybill_no, trace_time)
);
```
**问题:**
- ❌ 无分区字段(建议按`trace_time`做RANGE分区或按`carrier`做LIST分区
- ⚠️ 表名核对:实体类`@TableName("logistics_trace")`与数据库表`logistics_trace`一致;此前检查所引用的`logistics_traces`(复数)并不存在
---
### 2.4 异常检测定时任务 — ❌ 不存在
**检查范围:** `scheduled-task-service` 全部源码
在`scheduled_task`表中无任何物流异常检测任务记录也无Java代码实现。
---
### 2.5 物流时间轴接口 — ⚠️ 接口存在,数据依赖轨迹拉取
**接口路径:** `GET /api/logistics/trace/{waybillNo}`
```java
public TraceResponse getTraces(String waybillNo) {
List<LogisticsTrace> traces = logisticsTraceMapper.selectByWaybillNoOrderByTime(waybillNo);
if (traces.isEmpty()) {
traceSyncService.syncSingle(waybillNo, null, false); // 尝试同步
traces = logisticsTraceMapper.selectByWaybillNoOrderByTime(waybillNo);
}
if (traces.isEmpty()) return null; // ← 无数据时返回null
// 构建TraceResponse...
}
```
**问题:** 由于`queryTraces()`返回空,同步后仍无数据,时间轴为空。
---
## 三、修复方案
### 3.1 补充DDL分区 + 索引完善
**文件:** `services/logistics-service/src/main/resources/db/migration/V2__add_partition_and_indexes.sql`
```sql
-- =============================================
-- 物流轨迹表分区与索引补充
-- 适用于 MySQL 8.0+
-- 注意分区表不支持直接ALTER需要重建表
-- =============================================
-- Step 1: 创建带分区的影子表
CREATE TABLE IF NOT EXISTS logistics_trace_partitioned (
id BIGINT AUTO_INCREMENT,
waybill_no VARCHAR(50) NOT NULL,
carrier VARCHAR(20) NOT NULL,
status VARCHAR(30) DEFAULT '' COMMENT '轨迹节点状态',
status_label VARCHAR(50) DEFAULT '' COMMENT '状态标签',
location VARCHAR(200) DEFAULT '' COMMENT '轨迹发生地点',
description VARCHAR(500) DEFAULT '' COMMENT '轨迹描述',
trace_time DATETIME NOT NULL COMMENT '轨迹发生时间',
raw_status_code VARCHAR(50) DEFAULT '' COMMENT '物流商原始状态码',
raw_data JSON DEFAULT NULL COMMENT '原始轨迹数据',
created_at DATETIME DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
deleted TINYINT(1) DEFAULT 0 COMMENT '逻辑删除',
PRIMARY KEY (id, trace_time), -- 分区字段必须在主键中
INDEX idx_waybill_no (waybill_no),
INDEX idx_carrier (carrier),
INDEX idx_trace_time (trace_time),
INDEX idx_waybill_trace (waybill_no, trace_time),
INDEX idx_waybill_status (waybill_no, status),
INDEX idx_status (status)
)
ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
PARTITION BY RANGE (YEAR(trace_time) * 100 + MONTH(trace_time)) (
PARTITION p202601 VALUES LESS THAN (202602),
PARTITION p202602 VALUES LESS THAN (202603),
PARTITION p202603 VALUES LESS THAN (202604),
PARTITION p202604 VALUES LESS THAN (202605),
PARTITION p202605 VALUES LESS THAN (202606),
PARTITION p202606 VALUES LESS THAN (202607),
PARTITION p202607 VALUES LESS THAN (202608),
PARTITION p202608 VALUES LESS THAN (202609),
PARTITION p202609 VALUES LESS THAN (202610),
PARTITION p202610 VALUES LESS THAN (202611),
PARTITION p202611 VALUES LESS THAN (202612),
PARTITION p202612 VALUES LESS THAN (202701),
PARTITION p_future VALUES LESS THAN MAXVALUE
) COMMENT='物流轨迹记录表(分区版)';
-- Step 2: 迁移数据
INSERT INTO logistics_trace_partitioned
SELECT * FROM logistics_trace WHERE deleted = 0;
-- Step 3: 重命名(需要窗口期,建议在低峰期操作)
-- RENAME TABLE logistics_trace TO logistics_trace_old,
-- logistics_trace_partitioned TO logistics_trace;
-- =============================================
-- logistics_waybill_status 索引补充
-- =============================================
-- 注MySQL 8.0 不支持 CREATE INDEX IF NOT EXISTS
-- 重复执行前请先通过 information_schema.STATISTICS 确认索引不存在
CREATE INDEX idx_waybill_carrier_status
    ON logistics_waybill_status (waybill_no, carrier, status);
CREATE INDEX idx_sync_need
    ON logistics_waybill_status (need_sync, sync_status, sync_retry_count);
-- =============================================
-- 异常检测视图(方便异常查询)
-- =============================================
CREATE OR REPLACE VIEW v_logistics_exception AS
SELECT
w.id,
w.waybill_no,
w.carrier,
CASE w.carrier
WHEN 'SF' THEN '顺丰速运'
WHEN 'YTO' THEN '圆通速递'
WHEN 'ZTO' THEN '中通快递'
WHEN 'YUNDA' THEN '韵达快递'
ELSE w.carrier
END AS carrier_name,
w.status,
w.status_label,
w.location,
w.description,
w.last_trace_time,
w.sync_status,
w.sync_fail_reason,
w.order_id,
w.order_no,
w.receiver_name,
w.receiver_phone,
TIMESTAMPDIFF(HOUR, w.last_trace_time, NOW()) AS stale_hours,
CASE
WHEN w.status = 'EXCEPTION' THEN '运单异常'
WHEN w.sync_status = 3 THEN '同步失败'
WHEN w.status NOT IN ('SIGNED','RETURNED') AND TIMESTAMPDIFF(HOUR, w.last_trace_time, NOW()) > 48 THEN '停滞预警'
ELSE '正常'
END AS exception_type
FROM logistics_waybill_status w
WHERE w.deleted = 0;
```
---
### 3.2 创建物流异常检测定时任务
**文件:** `services/logistics-service/src/main/java/com/erp/logistics/job/LogisticsExceptionDetectJob.java`
```java
package com.erp.logistics.job;
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import com.erp.logistics.entity.WaybillStatus;
import com.erp.logistics.mapper.WaybillStatusMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
import java.util.List;
/**
* 物流异常检测定时任务
*
* 检测以下异常:
* 1. 运单状态为 EXCEPTION物流商报告异常
* 2. 同步失败次数超过阈值
* 3. 运单停滞超过48小时无新轨迹未签收且未退回
*/
@Slf4j
@Component
@RequiredArgsConstructor
public class LogisticsExceptionDetectJob {
private final WaybillStatusMapper waybillStatusMapper;
@Value("${logistics.exception.stale-hours:48}")
private Integer staleHours;
@Value("${logistics.exception.max-retry:5}")
private Integer maxRetry;
/**
* 每小时检测一次
* 触发时间每小时的第5分钟
*/
@Scheduled(cron = "0 5 * * * ?")
public void detectExceptions() {
log.info("[异常检测] 开始物流异常检测");
long startTime = System.currentTimeMillis();
try {
detectAbnormalStatus(); // 检测异常状态运单
detectSyncFailures(); // 检测同步失败运单
detectStaleWaybills(); // 检测停滞运单
} finally {
log.info("[异常检测] 完成,耗时: {}ms", System.currentTimeMillis() - startTime);
}
}
/**
* 检测异常状态运单status = EXCEPTION
*/
private void detectAbnormalStatus() {
LambdaQueryWrapper<WaybillStatus> wrapper = new LambdaQueryWrapper<WaybillStatus>()
.eq(WaybillStatus::getStatus, "EXCEPTION")
.eq(WaybillStatus::getDeleted, 0);
List<WaybillStatus> exceptions = waybillStatusMapper.selectList(wrapper);
for (WaybillStatus waybill : exceptions) {
log.warn("[异常检测] 运单异常: waybillNo={}, carrier={}, status={}, location={}, description={}",
waybill.getWaybillNo(), waybill.getCarrier(),
waybill.getStatusLabel(), waybill.getLocation(),
waybill.getDescription());
// TODO: 触发告警通知(发送邮件/短信/站内消息)
}
if (!exceptions.isEmpty()) {
log.info("[异常检测] 发现 {} 条异常状态运单", exceptions.size());
}
}
/**
* 检测同步失败运单sync_status = 3 且重试次数超限)
*/
private void detectSyncFailures() {
LambdaQueryWrapper<WaybillStatus> wrapper = new LambdaQueryWrapper<WaybillStatus>()
.eq(WaybillStatus::getSyncStatus, 3)
.ge(WaybillStatus::getSyncRetryCount, maxRetry)
.eq(WaybillStatus::getDeleted, 0);
List<WaybillStatus> failures = waybillStatusMapper.selectList(wrapper);
for (WaybillStatus waybill : failures) {
log.error("[异常检测] 同步失败: waybillNo={}, carrier={}, retryCount={}, reason={}",
waybill.getWaybillNo(), waybill.getCarrier(),
waybill.getSyncRetryCount(), waybill.getSyncFailReason());
// TODO: 触发告警,通知运维或客服
}
if (!failures.isEmpty()) {
log.info("[异常检测] 发现 {} 条同步失败运单", failures.size());
}
}
/**
* 检测停滞运单48小时无新轨迹且未签收/退回)
*/
private void detectStaleWaybills() {
LocalDateTime threshold = LocalDateTime.now().minusHours(staleHours);
LambdaQueryWrapper<WaybillStatus> wrapper = new LambdaQueryWrapper<WaybillStatus>()
.notIn(WaybillStatus::getStatus, "SIGNED", "RETURNED", "RETURNING")
.eq(WaybillStatus::getDeleted, 0)
.and(w -> w
.isNull(WaybillStatus::getLastTraceTime)
.or()
.lt(WaybillStatus::getLastTraceTime, threshold)
);
List<WaybillStatus> stale = waybillStatusMapper.selectList(wrapper);
for (WaybillStatus waybill : stale) {
long hours = waybill.getLastTraceTime() == null
? -1
: java.time.Duration.between(waybill.getLastTraceTime(), LocalDateTime.now()).toHours();
log.warn("[异常检测] 运单停滞: waybillNo={}, carrier={}, lastTraceTime={}, 停滞={}小时",
waybill.getWaybillNo(), waybill.getCarrier(),
waybill.getLastTraceTime(), hours);
// TODO: 触发告警或自动重新拉取
}
if (!stale.isEmpty()) {
log.info("[异常检测] 发现 {} 条停滞运单(超过{}小时无更新)", stale.size(), staleHours);
}
}
}
```
**配置项(追加到 `application.yml`**
```yaml
logistics:
exception:
stale-hours: 48 # 停滞阈值(小时)
max-retry: 5 # 同步失败重试上限
detect-cron: "0 5 * * * ?" # 异常检测Cron表达式每小时第5分钟
```
---
### 3.3 在 scheduled-task-service 中注册异常检测任务
**文件:** `services/logistics-service/src/main/resources/db/migration/V3__register_logistics_tasks.sql`
```sql
-- =============================================
-- 注册物流异常检测定时任务到 scheduled_task 表
-- =============================================
INSERT INTO scheduled_task (
task_name,
description,
task_group,
cron_expression,
task_class,
method_name,
task_params,
status,
concurrent,
sync,
task_type,
max_retries,
retry_interval,
timeout,
alert_enabled,
misfire_policy,
owner,
created_at,
updated_at
) VALUES (
'logistics_exception_detect',
'物流异常检测任务检测EXCEPTION状态、同步失败、停滞运单每小时执行一次',
'LOGISTICS',
'0 5 * * * ?',
'com.erp.logistics.job.LogisticsExceptionDetectJob',
'detectExceptions',
'{"staleHours":48,"maxRetry":5}',
'RUNNING',
TRUE,
FALSE,
'BEAN',
3,
60,
300,
TRUE,
'DO_NOTHING',
'system',
NOW(),
NOW()
);
-- 轨迹批量同步任务(已有@Scheduled这里补充DB记录
INSERT INTO scheduled_task (
task_name,
description,
task_group,
cron_expression,
task_class,
method_name,
task_params,
status,
concurrent,
sync,
task_type,
max_retries,
retry_interval,
timeout,
alert_enabled,
misfire_policy,
owner,
created_at,
updated_at
) VALUES (
'logistics_trace_sync',
'物流轨迹批量同步每30分钟同步一次待同步运单轨迹',
'LOGISTICS',
'0 */30 * * * ?',
'com.erp.logistics.service.TraceSyncService',
'syncPending',
'{"batchSize":100,"maxRetry":3}',
'RUNNING',
TRUE,
FALSE,
'BEAN',
3,
60,
600,
TRUE,
'DO_NOTHING',
'system',
NOW(),
NOW()
) ON DUPLICATE KEY UPDATE
description = VALUES(description),
cron_expression = VALUES(cron_expression),
task_params = VALUES(task_params),
status = 'RUNNING';
```
---
### 3.4 补充物流时间轴增强接口
**文件:** `services/logistics-service/src/main/java/com/erp/logistics/dto/response/TraceResponse.java`(需补充字段)
```java
// 追加以下字段到 TraceResponse
private Integer totalTraces; // 轨迹总节点数
private String estimatedDelivery; // 预计送达时间
private Boolean isOnTime; // 是否准时
private Integer transitDays; // 在途天数
private List<String> transitCities; // 途经城市列表
```
**增强接口:`LogisticsController` 追加**
```java
@GetMapping("/trace/timeline/{waybillNo}")
@Operation(summary = "物流时间轴(增强版)", description = "返回物流轨迹时间轴,包含预计送达、分段统计等")
public ApiResponse<TraceResponse> getTraceTimeline(
@Parameter(description = "运单号") @PathVariable String waybillNo) {
TraceResponse trace = logisticsService.getTracesEnhanced(waybillNo);
if (trace == null) {
return ApiResponse.notFound("未找到物流轨迹");
}
return ApiResponse.success(trace);
}
```
---
## 四、修复优先级
| 优先级 | 修复项 | 影响 | 工作量 |
|--------|--------|------|--------|
| P0 | 3.1 DDL分区补充 | 数据库性能、长期运维 | 中 |
| P0 | 3.2 异常检测Job | 运单异常无法发现 | 低 |
| P0 | 缺失依赖修复(hutool + spring-retry) | pom.xml缺少依赖导致编译失败 | 低 |
| P1 | 3.3 定时任务注册 | 异常检测Job无法自动触发 | 低 |
| P2 | 3.4 时间轴增强 | 接口体验 | 低 |
| P2 | 快递公司API对接 | 轨迹数据拉取需第三方API密钥 | 高(需商务对接) |
---
## 五、风险说明
1. **分区表重建风险**:MySQL分区表不支持在线变更主键和分区字段,需要在低峰期用`RENAME TABLE`切换。建议提前在测试环境验证。
2. **API对接依赖**:当前4个快递公司适配器的`queryTraces()`均未对接真实API,轨迹主动拉取功能暂时无效,**只能依赖回调推送**。建议尽快与顺丰、圆通等快递开放平台完成商务对接。
3. **告警渠道未配置**:`LogisticsExceptionDetectJob`发现异常后,`// TODO: 触发告警`部分需要接入邮件/钉钉/飞书等通知渠道。
---
## 六、文件清单
| 操作 | 文件路径 |
|------|----------|
| 新增 | `services/logistics-service/src/main/resources/db/migration/V2__add_partition_and_indexes.sql` |
| 新增 | `services/logistics-service/src/main/java/com/erp/logistics/job/LogisticsExceptionDetectJob.java` |
| 新增 | `services/logistics-service/src/main/resources/db/migration/V3__register_logistics_tasks.sql` |
| 修改 | `services/logistics-service/src/main/resources/application.yml`(追加 exception 配置节) |
| 修改 | `services/logistics-service/src/main/java/com/erp/logistics/dto/response/TraceResponse.java`(追加增强字段) |
| 修改 | `services/logistics-service/src/main/java/com/erp/logistics/controller/LogisticsController.java`(追加 timeline 接口) |
| 修改 | `services/logistics-service/src/main/java/com/erp/logistics/service/LogisticsService.java`(新增 getTracesEnhanced 方法) |
| 修改 | `services/logistics-service/pom.xml`(追加 hutool、spring-retry 依赖) |
## 七、验证结果
```bash
$ mvn clean compile -pl services/logistics-service -am -DskipTests
...
[INFO] Compiling 27 source files with javac [debug target 17] to target/classes
[INFO] BUILD SUCCESS
```
**编译通过** — 27个源文件全部编译成功(含新增的 `LogisticsExceptionDetectJob`)。

# 逻辑错误修复报告
**任务编号:** 2.3
**扫描范围:** `/root/.openclaw/workspace/erp-java-backend/services/` 下所有微服务(共31个)
**检查维度:** 循环依赖、事务边界错误、数据库死锁风险、订单状态机、空指针风险
---
## 一、循环依赖检查
### 检查结果:✅ 已有防护,无需修复
| 服务 | 注入点 | 状态 |
|------|--------|------|
| `dashboard-service/DashboardServiceImpl` | `RedisTemplate` 使用 `@Lazy` 避免循环依赖 | ✅ 已正确使用 |
| `tenant-service/TenantServiceImpl` | `RedisTemplate` 使用 `@Lazy` 避免循环依赖 | ✅ 已正确使用 |
其他服务均通过构造函数注入(`@RequiredArgsConstructor` + `final`),无循环依赖风险。
---
## 二、事务边界错误(Feign调用被@Transactional包裹)
### ⚠️ 高风险 — 发现问题(purchase-service)
**问题文件:**
- `purchase-service/.../impl/PurchaseOrderServiceImpl.java`
- `purchase-service/.../impl/PurchaseReturnServiceImpl.java`
- `purchase-service/.../impl/PurchaseInboundServiceImpl.java`
#### 问题 2.1:`approve()` 方法 — 本地事务先提交,远程通知后失败
```java
// PurchaseOrderServiceImpl.java:211
@Transactional
public PurchaseOrder approve(Long id, PurchaseOrderDTO.ApproveRequest request, Long operatorId) {
// ...更新本地订单状态...
    purchaseOrderMapper.updateById(order); // 本地状态更新,随事务提交
    try {
        financeClient.createAccountsPayable(...); // ❌ 远程调用在事务内,异常被catch后本地照常提交,但AP未创建
    } catch (Exception e) {
        log.warn("通知财务创建应付账款失败...");
    }
return order;
}
```
**风险分析:**
| 场景 | 结果 |
|------|------|
| Feign调用超时/异常被catch | 本地事务正常提交,财务应付账款**未创建** → 数据不一致 |
| Feign调用成功数据库更新失败 | 事务回滚但财务侧已创建AP → 数据不一致 |
| finance-service不可用 | 本地订单已审批AP未创建 |
#### 问题 2.2:`cancel()` 方法 — 同样问题
```java
// PurchaseOrderServiceImpl.java:246
@Transactional
public void cancel(Long id, Long operatorId) {
order.setStatus(PurchaseOrderStatus.CANCELLED.getCode());
    purchaseOrderMapper.updateById(order); // 本地状态更新,随事务提交
try {
financeClient.cancelAccountsPayable(order.getOrderNo()); // ❌ 失败则AP未取消
} catch (Exception e) {
log.warn("通知财务取消应付账款失败...");
}
}
```
#### 问题 2.3:`PurchaseReturnServiceImpl` — 确认退货时调用仓库
```java
// PurchaseReturnServiceImpl.java:137
@Transactional
public void confirmReturn(...) {
// 本地更新退货单状态
purchaseReturnMapper.updateById(ret);
// 调用仓库创建出库单
WarehouseFeignClient.OutboundResponse resp = warehouseClient.createOutbound(...);
// ❌ 如果这里失败,退货单已更新但出库单未创建
}
```
#### 问题 2.4:`PurchaseInboundServiceImpl` — 入库时调用仓库
```java
// PurchaseInboundServiceImpl.java:168
WarehouseFeignClient.InboundResponse resp = warehouseClient.createInbound(warehouseRequest, operatorId);
```
#### 问题 2.5:FeignClient无Fallback降级
`FinanceFeignClient``WarehouseFeignClient` 均未配置 `fallback``fallbackFactory`,服务不可用时直接抛出异常。
### ✅ 修复方案
**核心原则Feign远程调用必须放在事务之外或者在事务提交之后执行**
#### 方案A(推荐):使用 `TransactionSynchronizationManager` 在事务提交后执行
```java
@Transactional(rollbackFor = Exception.class)
public PurchaseOrder approve(Long id, PurchaseOrderDTO.ApproveRequest request, Long operatorId) {
PurchaseOrder order = purchaseOrderMapper.selectById(id);
if (order == null) throw new BusinessException("采购订单不存在");
if (!PurchaseOrderStatus.PENDING.getCode().equals(order.getStatus())) {
throw new BusinessException("只有待审批状态可以审批");
}
order.setStatus(PurchaseOrderStatus.APPROVED.getCode());
order.setApproverId(operatorId);
order.setApproveTime(LocalDateTime.now());
order.setUpdateBy(operatorId);
order.setUpdateTime(LocalDateTime.now());
purchaseOrderMapper.updateById(order);
// ✅ 事务提交后再通知财务,避免事务内远程调用失败导致回滚
TransactionSynchronizationManager.registerSynchronization(
new TransactionSynchronization() {
@Override
public void afterCommit() {
try {
financeClient.createAccountsPayable(
FinanceFeignClient.AccountsPayableRequest.builder()
.purchaseOrderId(order.getId())
.purchaseOrderNo(order.getOrderNo())
.supplierId(order.getSupplierId())
.amount(order.getTotalWithTax())
.paymentMethod(order.getPaymentMethod())
.build());
} catch (Exception e) {
log.error("通知财务创建应付账款失败: orderNo={}, error={}",
order.getOrderNo(), e.getMessage());
// 可选:发送告警通知人工介入
}
}
});
return order;
}
```
#### 方案B:为FeignClient添加Fallback
```java
@FeignClient(name = "finance-service",
url = "${finance-service.url:}",
fallbackFactory = FinanceFeignClientFallbackFactory.class)
public interface FinanceFeignClient { ... }
```
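fallbackFactory 的降级形态可以先用纯 Java 勾勒(下面的接口、类名均为示意,不是真实的 Feign API;真实项目中应实现 Spring Cloud OpenFeign 的 `FallbackFactory<T>` 接口并注册为 Bean):

```java
import java.util.function.Function;

// 纯 Java 示意 fallbackFactory 的降级形态(接口与类名均为假设,非真实 Feign API)
interface FinanceClient {
    boolean createAccountsPayable(String orderNo);
}

class FinanceClientFallbackFactory implements Function<Throwable, FinanceClient> {
    @Override
    public FinanceClient apply(Throwable cause) {
        // 降级实现:记录失败原因并返回 false,由上层登记待补偿任务
        return orderNo -> {
            System.out.println("finance-service 不可用,降级处理: " + orderNo
                    + ", cause=" + cause.getMessage());
            return false;
        };
    }
}

public class FallbackDemo {
    public static void main(String[] args) {
        FinanceClient client = new FinanceClientFallbackFactory()
                .apply(new RuntimeException("connection refused"));
        System.out.println(client.createAccountsPayable("PO-1001")); // false
    }
}
```

降级只解决"不抛异常",数据一致性仍需配合方案A的事后通知或补偿任务。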
---
## 三、数据库死锁风险
### ✅ 未发现明显死锁风险
**扫描范围:** 所有 `@Transactional` 方法中的多表操作
| 检查项 | 结果 |
|--------|------|
| 跨表显式 `SELECT ... FOR UPDATE` | 未发现(无悲观锁查询) |
| 多表按不同顺序访问 | 未发现跨服务多表锁场景 |
| `@Async` + `@Transactional` 混用 | 未发现 |
### ⚠️ 轻量级风险(库存并发)
**问题文件:** `inventory-service/.../StockCheckServiceImpl.java`
```java
@Transactional
public Stock lockStock(String skuCode, Long warehouseId, Integer quantity, Long orderId) {
Stock stock = getOrCreateStock(skuCode, warehouseId); // SELECT
if (stock.getAvailableQuantity() < quantity) {
throw new RuntimeException("可用库存不足");
}
stock.setLockedQuantity(stock.getLockedQuantity() + quantity);
    stockRepository.updateById(stock); // UPDATE — 无锁,存在并发覆盖风险
    return stock;
}
```
**风险:** 两个并发请求同时锁定同一SKU第一个请求的更新可能被第二个覆盖check-then-act竞态
**现状:** `Stock.version` 字段存在但**未标注 `@Version`**,乐观锁未生效。
**修复方案:** 添加 `@Version` 注解:
```java
// Stock.java
@Version
private Integer version;
```
MyBatis-Plus 会自动在 UPDATE 时追加 `AND version = ?`,更新失败时抛出 `OptimisticLockingFailureException`,触发事务回滚。
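乐观锁生效后,调用方还需要处理"更新失败后重读重试"。下面用纯 Java 的 CAS 模拟 `UPDATE ... AND version = ?` 的比对与重试流程(类与方法均为示意;真实项目可结合 pom 中已引入的 spring-retry 做声明式重试):

```java
import java.util.concurrent.atomic.AtomicInteger;

// 用 CAS 模拟 "UPDATE ... SET locked=?, version=version+1 WHERE id=? AND version=?" 的重试流程
public class OptimisticLockDemo {
    public static final AtomicInteger version = new AtomicInteger(0); // 模拟行上的 version 列
    public static volatile int locked = 0;                            // 模拟 locked_quantity 列

    /** 模拟一次带版本比对的 UPDATE;版本不匹配返回 false(相当于影响行数为 0) */
    public static boolean updateWithVersion(int expectedVersion, int newLocked) {
        if (version.compareAndSet(expectedVersion, expectedVersion + 1)) {
            locked = newLocked;
            return true;
        }
        return false;
    }

    /** 锁库存:更新失败时重新读取再试,最多 3 次 */
    public static boolean lockStock(int quantity) {
        for (int i = 0; i < 3; i++) {
            int v = version.get();   // 相当于 SELECT locked_quantity, version
            int cur = locked;
            if (updateWithVersion(v, cur + quantity)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        updateWithVersion(version.get(), 5);  // 模拟并发请求先一步更新
        System.out.println(lockStock(3));     // true
        System.out.println(locked);           // 8
    }
}
```

重试次数耗尽时应向上抛出业务异常,而不是静默返回失败。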
---
## 四、订单状态机检查
### ✅ 状态机设计合理,未发现问题
**已验证文件:**
- `order-service/.../OrderStateMachine.java`
- `aftersale-service/.../AfterSaleStatusMachine.java`
| 验证项 | 结果 |
|--------|------|
| 所有状态转换均通过 `canTransition()` 校验 | ✅ |
| 终态completed/cancelled/closed不可再流转 | ✅ |
| 审核通过后自动设置 `auditStatus=approved` | ✅ |
| `canAudit/canShip/canComplete/canCancel` 边界检查 | ✅ |
**状态流转图Order**
```
pending → auditing → shipped → completed
↓ ↓ ↓
cancelled cancelled cancelled
```
**状态流转图AfterSale**
```
pending → approved → processing → received → completed → closed
↓ ↓
rejected rejected → pending (重新开启)
```
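上面的订单流转图对应的 `canTransition()` 骨架大致如下(纯 Java 示意,状态集取自流转图,与真实 `OrderStateMachine` 的实现细节未必一致):

```java
import java.util.Map;
import java.util.Set;

// 订单状态机 canTransition 的示意实现(状态与流转关系取自上文订单流转图)
public class OrderStateMachineSketch {
    private static final Map<String, Set<String>> TRANSITIONS = Map.of(
            "pending", Set.of("auditing", "cancelled"),
            "auditing", Set.of("shipped", "cancelled"),
            "shipped", Set.of("completed", "cancelled"),
            "completed", Set.of(),  // 终态,不可再流转
            "cancelled", Set.of()); // 终态,不可再流转

    public static boolean canTransition(String from, String to) {
        return TRANSITIONS.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canTransition("pending", "auditing"));    // true
        System.out.println(canTransition("completed", "cancelled")); // false
    }
}
```

把流转关系收敛到一张表里,终态只需给空集合,新增状态时也只改一处。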
---
## 五、空指针风险NPE
### ⚠️ 发现2处潜在NPE其他已有防护
#### 问题 5.1:`PurchaseStatisticsServiceImpl.getSupplierStats()`
**文件:** `purchase-service/.../PurchaseStatisticsServiceImpl.java:127`
```java
// 从Map按supplierId分组,理论上value不会为空
List<PurchaseOrder> supplierOrders = entry.getValue();
String supplierName = supplierOrders.get(0).getSupplierName();
// ⚠️ get(0) 依赖列表非空,且 getSupplierName() 可能返回 null
```
**实际情况:** `entry.getValue()` 来自 `Collectors.groupingBy()`,key存在则value必为非空List;但`getSupplierName()` 可能在数据库中为NULL。
**修复:** 防御性处理
```java
String supplierName = supplierOrders.stream()
.map(PurchaseOrder::getSupplierName)
.filter(Objects::nonNull)
.findFirst()
.orElse("未知供应商");
```
#### 问题 5.2:`OrderSettlementServiceImpl` 返回 `.get(0)`
**文件:** `order-service/.../OrderSettlementServiceImpl.java:530`
```java
return stats.getShopStatistics().get(0); // ⚠️ 列表可能为空
```
**修复:**
```java
List<ShopSettlementVO> shopStats = stats.getShopStatistics();
if (shopStats == null || shopStats.isEmpty()) {
return null; // 或返回空VO
}
return shopStats.get(0);
```
### ✅ 已正确防护的位置
| 位置 | 防护方式 |
|------|----------|
| `ai-service/AIChatServiceImpl:210` | `if (choices != null && !choices.isEmpty())` |
| `logistics-service/LogisticsService:56` | `if (traces.isEmpty())` → return null |
| `approval-flow-service/AuditRuleServiceImpl:417` | `if (!ruleActions.isEmpty())` |
| `system-tool-service/DataConsistencyService:127` | 三元运算符 `isEmpty() ? ... : ...` |
---
## 六、汇总
| 类别 | 风险等级 | 发现数量 | 已修复 |
|------|----------|----------|--------|
| 循环依赖 | ✅ 无风险 | 0 | — |
| 事务边界+Feign | 🔴 高风险 | 5处 | 需要修复 |
| 数据库死锁 | 🟡 中风险 | 1处 | 需要修复 |
| 订单状态机 | ✅ 无风险 | 0 | — |
| 空指针风险 | 🟡 中风险 | 2处 | 需要修复 |
---
## 七、修复优先级
| 优先级 | 修复项 | 所属问题 |
|--------|--------|----------|
| P0 | `PurchaseOrderServiceImpl` Feign调用移出事务 | 问题2.1、2.2 |
| P0 | `PurchaseReturnServiceImpl` Feign调用移出事务 | 问题2.3 |
| P0 | `Stock.version` 添加 `@Version` | 问题三 |
| P1 | `PurchaseStatisticsServiceImpl` NPE防护 | 问题5.1 |
| P1 | `OrderSettlementServiceImpl` NPE防护 | 问题5.2 |
| P2 | 为 `FinanceFeignClient` 添加Fallback | 问题2.5 |
| P2 | 为 `WarehouseFeignClient` 添加Fallback | 问题2.5 |

# 重复架构问题清单
> 扫描时间:2026-04-05
> 扫描范围:`/root/.openclaw/workspace/erp-java-backend/services/` 下 30 个微服务
> 最后更新:2026-04-05(修复已完成)
---
## ✅ 检查任务1:重复工具类
**结论:未发现大规模重复工具类**
- 未发现跨服务重复的 `DateUtils`、`StringUtils`、`JsonUtils` 等通用工具类
- 各服务有少量服务级工具类(如 `CronUtils`、`CsvUtils`、`ExcelUtils`),属于正常范围,无需提取
---
## ✅ 检查任务2:重复DTO(已修复)
### 2.1 ApiResponse — 13个副本 ✅ 已修复
**修复前**:13个服务各维护一份 `ApiResponse`
**修复后**:
- 所有服务统一使用 `com.erp.common.core.model.ApiResponse`(来自 `common-core` 模块)
- 所有本地 `ApiResponse.java` 文件已删除
- 受影响服务:aftersale-service, ai-service, approval-flow-service, inventory-service, logistics-service, order-service, platform-sync-service, print-service, purchase-service, reconciliation-service, supplier-service, warehouse-service, waybill-service
### 2.2 PageResponse — 11个副本 ✅ 已修复
**修复前**:11个服务各维护一份 `PageResponse`
**修复后**:
- 所有服务统一使用 `com.erp.common.core.model.PageResponse`(来自 `common-core` 模块)
- 所有本地 `PageResponse.java` 文件已删除
- 受影响服务:同上(去掉 platform-sync-service 和 waybill-service)
### 2.3 各服务特有DTO(业务相关,无需提取)
- `StockWarningDTO`、`SettlementDTO`、`PurchaseOrderDTO` 等属于业务DTO,不可跨服务提取
---
## ✅ 检查任务3:重复配置类(已修复)
### 3.1 OpenApiConfig — 27个副本 ✅ 已修复
**修复前**:27个服务各维护一份几乎完全相同的 `OpenApiConfig`
**修复后**:
- 提取到 `common/common-config` 模块的 `com.erp.common.config.OpenApiConfig`
- 所有服务删除本地 `OpenApiConfig.java`
- 通过 `application.yml` 的 `erp.openapi.*` 配置项动态设置服务名称和URL
- 所有服务 pom.xml 添加 `common-config` 依赖
### 3.2 JacksonConfig — 3个副本 ✅ 已修复
**修复前**:finance-service, approval-flow-service, purchase-service 各有一份相同实现
**修复后**:
- 提取到 `common/common-config` 模块的 `com.erp.common.config.JacksonConfig`
- 所有服务删除本地 `JacksonConfig.java`
### 3.3 RedisConfig — 3个副本 ✅ 已统一
**修复前**
| 服务 | 实现 |
|------|------|
| logistics-service | `Jackson2JsonRedisSerializer` |
| scheduled-task-service | `GenericJackson2JsonRedisSerializer` |
| system-tool-service | `StringRedisSerializer` |
**修复后**
- 提取到 `common/common-config` 模块的 `com.erp.common.config.RedisConfig`
- 提供统一的 `RedisTemplate<String, Object>` (Jackson2Json) 和 `StringRedisTemplate`
- 各种序列化方案均可在 common-config 中找到,避免各服务自行实现混乱
### 3.4 MyBatisPlusConfig / MyBatisConfig — 保留
各服务的 MyBatisPlusConfig 包含服务特定的配置(如分页大小),**保留在各自服务**是合理的。
---
## 🔴 检查任务4:跨服务共享数据库实体(待治理)
以下实体存在跨服务共享同一张表但各自维护独立实体的问题,**建议作为下一轮架构优化任务处理**
### 4.1 Stock — 4个实体,3种表名
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| inventory-service | `stocks` | 保留 inventory-service 为 Stock 主服务 |
| warehouse-service | `stock` | 改用 Feign 调用 inventory-service |
| dashboard-service | `stocks` | 改用 Feign 调用 inventory-service |
| tenant-service | `stocks` | 多租户版需单独讨论 |
### 4.2 Warehouse — 3个实体,2种表名
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| inventory-service | `warehouses` | 保留 inventory-service 为精简版主服务 |
| warehouse-service | `warehouse` | 完整版主服务 |
| dashboard-service | `warehouses` | 改用 Feign 调用 warehouse-service |
### 4.3 Order — 2个实体,同表异构
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| order-service | `orders` | **主服务**,保留完整实体 |
| dashboard-service | `orders` | 改用 Feign 调用 order-service |
### 4.4 OrderItem — 2个实体,同表异构
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| order-service | `order_items` | **主服务** |
| product-service | `order_items` | 删除本地实体,通过 Feign 调用 order-service |
### 4.5 Supplier — 4个实体,2种表名
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| supplier-service | `suppliers` | **主服务**,保留完整实体 |
| product-service | `suppliers` | 删除本地实体,通过 Feign 调用 supplier-service |
| purchase-service | `supplier` | 删除本地实体,通过 Feign 调用 supplier-service |
| tenant-service | `suppliers` | 多租户版需单独讨论 |
### 4.6 ErpSku — 2个实体,2种表名
| 服务 | 表名 | 建议操作 |
|------|------|----------|
| product-service | `erp_skus` | **主服务** |
| sku-match-service | `erp_sku` | 删除本地实体,通过 Feign 调用 product-service |
---
## 📋 本轮修复汇总
### 新增模块
| 模块 | 路径 | 内容 |
|------|------|------|
| common-config | `common/common-config/` | OpenApiConfig、JacksonConfig、RedisConfig |
### 修改的服务(18个)
**已添加 common-config 依赖:**
aftersale-service, ai-service, approval-flow-service, inventory-service, logistics-service,
order-service, platform-sync-service, print-service, purchase-service, reconciliation-service,
supplier-service, warehouse-service, waybill-service, scheduled-task-service, system-tool-service,
dashboard-service, tenant-service, finance-service
**已删除的重复文件:**
- 27个 `OpenApiConfig.java`
- 3个 `JacksonConfig.java`
- 3个 `RedisConfig.java`
- 13个 `ApiResponse.java`
- 11个 `PageResponse.java`
**已更新的 import 语句:**
- 所有服务中的 `com.erp.<service>.dto.ApiResponse` → `com.erp.common.core.model.ApiResponse`
- 所有服务中的 `com.erp.<service>.dto.PageResponse` → `com.erp.common.core.model.PageResponse`
---
## 📋 待处理(P2级 - 下一轮架构优化)
| 优先级 | 问题 | 说明 |
|--------|------|------|
| P2-高 | Stock 跨服务 | inventory/warehouse/dashboard 需明确主服务,改用 Feign |
| P2-高 | Order/OrderItem 跨服务 | dashboard/product 需改用 Feign 调用 order-service |
| P2-高 | Supplier 跨服务 | product/purchase 需改用 Feign 调用 supplier-service |
| P2-中 | ErpSku 跨服务 | sku-match-service 需改用 Feign 调用 product-service |
| P2-中 | Warehouse 跨服务 | inventory/dashboard 需明确职责边界 |

# ERP Java微服务系统 - 项目开发汇总报告
> 📅 报告生成时间:2026年4月5日
> 📌 版本:v1.0
---
## 1. 项目概述
| 属性 | 值 |
|------|-----|
| **开发时间** | 2026年4月 |
| **项目规模** | 31个微服务 |
| **代码总量** | 约10.9万行Java代码 |
| **技术栈** | Java 17 + Spring Boot 3.x + Spring Cloud Alibaba 2023.x |
| **文档数量** | 31个README + 部署文档 + 数据库设计文档 |
| **构建工具** | Maven |
| **基础设施** | MySQL 8.0 / Redis 7.x / Nacos 2.x / RocketMQ 5.x / Seata 1.7 / MinIO |
### 核心特性
- ✅ **微服务架构**:基于Spring Cloud Alibaba的完整微服务解决方案
- ✅ **多租户支持**:完整的租户隔离体系
- ✅ **分布式事务**:Seata AT模式保证数据一致性
- ✅ **消息队列**:RocketMQ实现异步解耦
- ✅ **配置中心**:Nacos统一管理所有配置
- ✅ **API网关**:Spring Cloud Gateway统一入口
- ✅ **监控体系**:SkyWalking全链路追踪
- ✅ **对象存储**:MinIO私有化文件存储
---
## 2. 微服务清单
| 序号 | 服务名 | 端口 | 容器端口 | 功能描述 | API数量 | 数据库 |
|------|--------|------|----------|----------|---------|--------|
| 1 | gateway | 8080 | 8080 | API网关 - 统一入口,路由转发、鉴权、限流 | - | erp_java |
| 2 | admin-service | 8081 | 8081 | 总控服务 - 租户管理、套餐管理、系统配置 | - | erp_java |
| 3 | user-service | 8082 | 8082 | 用户服务 - 用户管理、认证、权限 | 6 | erp_java |
| 4 | product-service | 8083 | 8083 | 商品服务 - 商品管理、分类、品牌、供应商 | 14 | erp_java |
| 5 | order-service | 8084 | 8082 | 订单服务 - 订单管理、订单结算 | 54 | erp_java |
| 6 | inventory-service | 8085 | 8084 | 库存服务 - 库存管理、盘点、预警 | 25 | erp_java |
| 7 | tenant-service | 8086 | 8083 | 租户服务 - 多租户管理、套餐、权限隔离 | 55 | erp_db |
| 8 | file-service | 8090 | 8082 | 文件服务 - 文件上传、下载、预览、管理 | 13 | erp_db |
| 9 | scheduled-task-service | 8091 | 8088 | 定时任务服务 - 分布式任务调度(兼容XXL-Job) | 36 | erp_task |
| 10 | aftersale-service | 8087 | 8087 | 售后管理服务 - 退款/退货/换货/维修全流程 | 22 | erp_db |
| 11 | ai-service | 8087 | 8087 | AI助手服务 - 智能对话、模型管理、任务执行 | 17 | erp_db |
| 12 | approval-flow-service | 8086 | 8086 | 审核流服务 - 审核规则、流程、日志 | 15 | erp_db |
| 13 | audit-log-service | 8098 | 8098 | 操作日志审计服务 - 审计日志、敏感操作告警 | 20 | erp_audit |
| 14 | category-service | 8096 | 8085 | 分类管理服务 - 层级分类完整CRUD | 0 | - |
| 15 | customer-service | 8086 | 8086 | 客户关系管理服务 - 客户、跟进、联系人、地址 | 67 | erp_db |
| 16 | dashboard-service | 8086 | 8086 | 仪表盘服务 - 首页数据、统计、预警 | 1 | erp_db |
| 17 | data-import-export-service | 8088 | 8088 | 数据导入导出服务 - Excel/CSV、批量处理 | 17 | erp_java |
| 18 | finance-service | 8007 | 8007 | 财务服务 - 收款/退款、应付/付款、财务报表 | 55 | erp_db |
| 19 | invoice-service | 8086 | 8086 | 发票管理服务 - 发票申请、开具、作废全生命周期 | 16 | erp_db |
| 20 | logistics-service | 8086 | 8086 | 物流轨迹服务 - 轨迹同步、多物流商适配 | 9 | erp_db |
| 21 | notification-service | 8087 | 8087 | 通知服务 - 消息推送、通知模板 | 0 | - |
| 22 | permission-service | 8084 | 8084 | 权限服务 - RBAC角色权限管理 | 10 | erp_db |
| 23 | print-service | 8089 | 8089 | 打印服务 - 打印模板、打印任务、多打印机支持 | 28 | erp_db |
| 24 | purchase-service | 8010 | 8010 | 采购管理服务 - 采购订单、供应商、入库、退货 | 35 | purchase_db |
| 25 | reconciliation-service | 8018 | 8018 | 对账管理服务 - 账单核对、差异处理 | 27 | erp_db |
| 26 | report-service | 8084 | 8084 | 报表统计服务 - 订单/销售/库存/采购/资金统计 | 28 | erp_db |
| 27 | sku-match-service | 8084 | 8084 | SKU匹配服务 - 平台商品管理、规则匹配、AI匹配 | 23 | erp_db |
| 28 | supplier-service | 8086 | 8086 | 供应商管理服务 - 供应商信息、联系人、银行账户、评级 | 34 | erp_db |
| 29 | system-tool-service | 8087 | 8087 | 系统工具服务 - 配置、日志、监控、数据清理 | 35 | erp_db |
| 30 | warehouse-service | 8084 | 8084 | 云仓服务 - 仓库管理、云仓集成、库存同步 | 53 | erp_db |
| 31 | waybill-service | 8086 | 8086 | 运单服务 - 运单管理、电子面单、打印、状态跟踪 | 16 | erp_db |
**API总计**:约 **620+** 个REST接口
---
## 3. 基础设施
### 3.1 服务注册与配置中心 - Nacos
| 属性 | 值 |
|------|-----|
| 版本 | nacos/nacos-server:v2.2.3 |
| 端口 | 8848(主控)、9848(gRPC)、9849(gRPC) |
| 控制台 | http://localhost:8848/nacos |
| 账号 | nacos / nacos |
| 模式 | 单机模式(开发)/ 集群模式(生产) |
**Nacos配置管理**
- 共享配置:`common-config.yaml`、`datasource-config.yaml`、`redis-config.yaml`
- 服务专属配置:每个服务有独立的`{service}-config.yaml`
- 命名空间:支持多环境(dev/test/prod)
### 3.2 消息队列 - RocketMQ
| 属性 | 值 |
|------|-----|
| 版本 | apache/rocketmq:5.1.4 |
| Namesrv端口 | 9876 |
| Broker端口 | 10911 |
| 控制台 | http://localhost:8080 |
| 特色 | 支持事务消息、延迟消息、定时消息 |
**主题设计**
- `erp-order-topic` - 订单事件
- `erp-inventory-topic` - 库存变动
- `erp-notification-topic` - 通知消息
- `erp-payment-topic` - 支付回调
### 3.3 分布式事务 - Seata
| 属性 | 值 |
|------|-----|
| 版本 | seataio/seata-server:1.7.1 |
| TC端口 | 8091 |
| 控制台端口 | 7091 |
| 事务模式 | AT模式 |
| 存储 | DB模式MySQL |
**应用场景**
- 订单创建 + 库存扣减
- 采购入库 + 财务记账
- 多服务数据一致性保障
### 3.4 API网关 - Spring Cloud Gateway
| 属性 | 值 |
|------|-----|
| 端口 | 8080 |
| 路由策略 | Path路径匹配 |
| 鉴权 | JWT Token验证 |
| 限流 | Redis + 令牌桶算法 |
| 健康检查 | /actuator/health |
---
## 4. 项目结构
```
erp-java-backend/
├── docker-compose.yml # 开发环境全量Docker配置
├── docker-compose.cluster.yml # 集群模式Docker配置
├── pom.xml # 父级Maven配置
├── README.md # 项目主文档
├── DEVELOPMENT.md # 开发指南
├── project-structure.md # 项目结构说明
├── gateway/ # API网关Spring Cloud Gateway
│ ├── pom.xml
│ ├── README.md
│ ├── src/
│ │ └── main/
│ │ ├── java/com/erp/gateway/
│ │ └── resources/application.yml
│ └── docker/Dockerfile
├── common/ # 公共模块
│ ├── common-core/ # 核心工具类、异常、常量
│ ├── common-web/ # Web响应封装、异常处理
│ ├── common-mybatis/ # MyBatis Plus配置
│ └── common-redis/ # Redis配置模板
├── services/ # 31个业务微服务
│ ├── admin-service/ # 总控服务(租户/套餐)
│ │ ├── src/main/java/com/erp/admin/
│ │ ├── src/main/resources/
│ │ │ ├── bootstrap.yml # Nacos配置引导
│ │ │ └── application.yml
│ │ ├── Dockerfile
│ │ ├── docker-compose.yml
│ │ ├── k8s/deployment.yaml # K8s部署配置
│ │ └── README.md
│ │
│ ├── user-service/ # 用户服务
│ ├── product-service/ # 商品服务
│ ├── order-service/ # 订单服务
│ ├── inventory-service/ # 库存服务
│ ├── finance-service/ # 财务服务
│ ├── tenant-service/ # 租户服务
│ ├── customer-service/ # 客户管理服务
│ ├── supplier-service/ # 供应商服务
│ ├── purchase-service/ # 采购服务
│ ├── warehouse-service/ # 云仓服务
│ ├── logistics-service/ # 物流服务
│ ├── waybill-service/ # 运单服务
│ ├── invoice-service/ # 发票服务
│ ├── aftersale-service/ # 售后管理服务
│ ├── permission-service/ # 权限服务
│ ├── print-service/ # 打印服务
│ ├── file-service/ # 文件服务
│ ├── notification-service/ # 通知服务
│   ├── scheduled-task-service/      # 定时任务服务
│ ├── report-service/ # 报表统计服务
│ ├── reconciliation-service/ # 对账服务
│ ├── sku-match-service/ # SKU匹配服务
│ ├── ai-service/ # AI助手服务
│ ├── approval-flow-service/ # 审核流服务
│ ├── audit-log-service/ # 审计日志服务
│ ├── category-service/ # 分类管理服务
│ ├── dashboard-service/ # 仪表盘服务
│ ├── data-import-export-service/ # 数据导入导出服务
│ ├── system-tool-service/ # 系统工具服务
│ └── api-gateway/ # API网关占位
├── infrastructure/ # 基础设施配置
│ ├── kubernetes/ # K8s集群配置
│ │ ├── erp-global-infra.yaml # 全局ConfigMap/Secret/Ingress
│ │ ├── erp-db-init-job.yaml # 数据库初始化Job
│ │ └── kustomization.yaml # Kustomization配置
│ ├── mysql/ # MySQL初始化脚本
│ │ └── init.sql
│ ├── nacos/ # Nacos配置
│ │ ├── docker-compose.standalone.yml
│ │ ├── docker-compose.cluster.yml
│ │ └── examples/
│ │ ├── shared-config/ # 共享配置(公共/数据源/Redis
│ │ └── client-config/ # 客户端引导配置
│ ├── redis/ # Redis配置
│ └── rocketmq/ # RocketMQ配置
├── scripts/ # 部署脚本
│ ├── init-db.sh # 数据库初始化
│ ├── setup-config-center.sh # 配置中心初始化
│ ├── start-dev.sh # 开发环境启动
│ └── update-microservices-config.sh # 更新微服务配置
└── docs/ # 文档目录
├── DEPLOYMENT.md # 部署文档
├── 微服务架构与数据库设计报告.md
├── database-split-design.md # 数据库拆分设计
├── 前后端功能对比分析.md
├── 缺失功能清单.md
├── api-docs/ # API文档
├── database/ # 数据库设计文档
└── deployment/ # 部署相关文档
```
---
## 5. 快速启动
### 5.1 环境要求
| 组件 | 版本要求 |
|------|----------|
| JDK | 17+ |
| Maven | 3.8+ |
| Docker | 20.10+ |
| MySQL | 8.0+ |
| Redis | 7.0+ |
### 5.2 Docker Compose 一键启动
```bash
# 进入项目根目录
cd /root/.openclaw/workspace/erp-java-backend
# 启动所有基础设施服务(推荐首次运行)
docker-compose up -d mysql redis nacos rocketmq seata minio elasticsearch
# 启动API网关
docker-compose up -d gateway
# 按需启动业务服务
docker-compose up -d user-service product-service order-service inventory-service
# 查看服务状态
docker-compose ps
# 查看特定服务日志
docker-compose logs -f user-service
# 停止所有服务
docker-compose down
```
### 5.3 全量启动
```bash
# 构建所有镜像
docker-compose build
# 启动全部服务
docker-compose up -d
# 查看健康状态
docker-compose ps
```
### 5.4 开发环境 IDE 启动
```bash
# 1. 启动基础设施
docker-compose up -d mysql redis nacos
# 2. 在IDE中导入项目根pom.xml
# 3. 启动顺序:
# ① Nacos → ② user-service → ③ product-service → ④ order-service → ...
# 4. 访问Nacos控制台确认服务注册
open http://localhost:8848/nacos
```
### 5.5 基础设施访问地址
| 服务 | 地址 | 账号 |
|------|------|------|
| Nacos控制台 | http://localhost:8848/nacos | nacos / nacos |
| RocketMQ控制台 | http://localhost:8080 | - |
| MinIO控制台 | http://localhost:9001 | minioadmin / minioadmin123 |
| SkyWalking UI | http://localhost:8081 | - |
| Elasticsearch | http://localhost:9200 | - |
---
## 6. API文档
### 6.1 Swagger / Knife4j API文档
各服务均集成 Knife4j(增强版 Swagger),可通过以下地址访问:
| 序号 | 服务名 | Swagger地址 | 文档模式 |
|------|--------|-------------|---------|
| 1 | gateway | http://localhost:8080/doc.html | Knife4j |
| 2 | admin-service | http://localhost:8081/doc.html | Knife4j |
| 3 | user-service | http://localhost:8082/doc.html | Knife4j |
| 4 | product-service | http://localhost:8083/doc.html | Knife4j |
| 5 | order-service | http://localhost:8082/doc.html (容器内) | Knife4j |
| 6 | inventory-service | http://localhost:8084/doc.html (容器内) | Knife4j |
| 7 | tenant-service | http://localhost:8083/doc.html (容器内) | Knife4j |
| 8 | finance-service | http://localhost:8007/doc.html | Knife4j |
| 9 | customer-service | http://localhost:8086/doc.html | Knife4j |
| 10 | supplier-service | http://localhost:8086/doc.html | Knife4j |
| 11 | purchase-service | http://localhost:8010/doc.html | Knife4j |
| 12 | warehouse-service | http://localhost:8084/doc.html (容器内) | Knife4j |
| 13 | print-service | http://localhost:8089/doc.html | Knife4j |
| 14 | file-service | http://localhost:8082/doc.html (容器内) | Knife4j |
| 15 | report-service | http://localhost:8084/doc.html (容器内) | Knife4j |
| 16 | aftersale-service | http://localhost:8087/doc.html | Knife4j |
| 17 | ai-service | http://localhost:8087/doc.html | Knife4j |
| 18 | approval-flow-service | http://localhost:8086/doc.html | Knife4j |
| 19 | invoice-service | http://localhost:8086/doc.html | Knife4j |
| 20 | logistics-service | http://localhost:8086/doc.html | Knife4j |
| 21 | sku-match-service | http://localhost:8084/doc.html (容器内) | Knife4j |
| 22 | system-tool-service | http://localhost:8087/doc.html | Knife4j |
> **说明**:外部端口映射方式:`http://{主机IP}:{外部端口}/doc.html`
> 内部端口方式需通过API网关统一暴露或在`docker-compose.yml`中调整端口映射。
### 6.2 网关路由配置
API网关默认路由规则
```
/api/auth/**    → erp-auth(认证服务)
/api/user/**    → erp-user(用户服务)
/api/admin/**   → erp-admin(总控服务)
/api/product/** → erp-product(商品服务)
/api/order/**   → erp-order(订单服务)
/api/finance/** → erp-finance(财务服务)
/api/tenant/**  → erp-tenant(租户服务)
```
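上述路由规则本质上是"路径前缀 → 服务名"的有序匹配,可用纯 Java 示意(生产中由 Spring Cloud Gateway 的 Path 断言完成,下面的类仅用来说明匹配语义,路由表内容节选自上文):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// "路径前缀 → 服务名" 的有序匹配示意
public class RouteTable {
    private static final Map<String, String> ROUTES = new LinkedHashMap<>();
    static {
        ROUTES.put("/api/user/", "erp-user");
        ROUTES.put("/api/product/", "erp-product");
        ROUTES.put("/api/order/", "erp-order");
    }

    /** 按插入顺序找到第一条前缀匹配的路由 */
    public static Optional<String> match(String path) {
        return ROUTES.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(match("/api/order/123").orElse("NOT_FOUND")); // erp-order
        System.out.println(match("/metrics").orElse("NOT_FOUND"));       // NOT_FOUND
    }
}
```

前缀有包含关系时(如 `/api/order/` 与 `/api/order/settlement/`),应把更长的前缀放在前面。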
---
## 7. 数据库设计
### 7.1 数据库总览
| 数据库名 | 服务 | 字符集 | 连接地址 |
|----------|------|--------|----------|
| `erp_java` | user/product/order/inventory/admin等核心服务 | utf8mb4 | localhost:3307 |
| `erp_db` | 多数业务服务(租户隔离) | utf8mb4 | 111.229.80.149:3306 |
| `erp_task` | 定时任务服务 | utf8mb4 | 111.229.80.149:3306 |
| `erp_audit` | 审计日志服务 | utf8mb4 | localhost:3306 |
| `purchase_db` | 采购服务专属 | utf8mb4 | 111.229.80.149:3306 |
### 7.2 核心数据表erp_java
```
erp_java/
├── 用户模块
│ ├── users # 用户表
│ ├── roles # 角色表
│ ├── permissions # 权限表
│ ├── user_roles # 用户角色关联表
│ └── login_logs # 登录日志
├── 商品模块
│ ├── products # 商品表
│ ├── categories # 商品分类
│ ├── brands # 品牌表
│ └── product_skus # SKU表
├── 订单模块
│ ├── orders # 订单主表
│ ├── order_items # 订单明细
│ └── order_operations # 订单操作记录
├── 库存模块
│ ├── inventory # 库存表
│ ├── inventory_logs # 库存变动日志
│ └── warehouses # 仓库表
└── 公共模块
├── tenants # 租户表
├── files # 文件表
└── system_configs # 系统配置
```
### 7.3 数据库连接配置
**开发环境(Docker Compose)**
```yaml
Host: mysql (容器内) / localhost:3307 (宿主机)
Database: erp_java
Username: erp_user
Password: erp123456
```
**生产环境(示例)**
```yaml
Host: 111.229.80.149
Port: 3306
Database: erp_db
Username: root
Password: nihao588+
```
### 7.4 读写分离与分库分表规划
- **读写分离**:订单、商品等高并发表支持读写分离
- **分库策略**:按租户ID(tenant_id)分库
- **分表策略**:订单表按月分表(orders_YYYYMM)
- **NoSQL补充**Redis缓存热点数据Elasticsearch支持商品搜索
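分库分表的路由口径可以用一个小示例说明(分库数量 4、库名前缀 `erp_db_` 均为假设,仅演示取模与按月命名的规则):

```java
import java.time.YearMonth;
import java.time.format.DateTimeFormatter;

// 分库分表路由口径示意:按租户ID取模分库、按月份命名订单分表(库数与库名前缀为假设)
public class ShardingRouter {

    /** 分库:tenant_id 对库数量取模 */
    public static String database(long tenantId, int dbCount) {
        return "erp_db_" + (tenantId % dbCount);
    }

    /** 分表:orders_YYYYMM */
    public static String orderTable(YearMonth month) {
        return "orders_" + month.format(DateTimeFormatter.ofPattern("yyyyMM"));
    }

    public static void main(String[] args) {
        System.out.println(database(1007L, 4));                // erp_db_3
        System.out.println(orderTable(YearMonth.of(2026, 4))); // orders_202604
    }
}
```

真实落地通常交给 ShardingSphere 一类中间件,由配置声明同样的取模与时间分片规则。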
---
## 8. 部署架构
### 8.1 Kubernetes 部署
所有服务均提供K8s部署配置位于各服务的 `k8s/deployment.yaml``kubernetes/deployment.yaml`
**前置要求**
- Kubernetes 1.25+
- kubectl已配置
- Ingress Controller(nginx-ingress)
- StorageClass(持久化存储)
**一键部署**
```bash
# 1. 创建命名空间
kubectl apply -f infrastructure/kubernetes/erp-global-infra.yaml
# 2. 使用Kustomization部署
kubectl apply -k infrastructure/kubernetes/
# 3. 部署业务服务
for svc in user product order inventory tenant; do
kubectl apply -f services/${svc}-service/k8s/deployment.yaml
done
# 4. 验证部署
kubectl get pods -n erp-prod
kubectl get svc -n erp-prod
kubectl get ingress -n erp-prod
```
### 8.2 HPA 自动扩缩容
所有服务均配置HPA(Horizontal Pod Autoscaler):
```yaml
spec:
minReplicas: 3 # 最小3个副本
maxReplicas: 10 # 最大10个副本
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70 # CPU 70%时扩容
```
### 8.3 PDB 滚动更新保护
```bash
# 查看Pod中断预算
kubectl get pdb -n erp-prod
# 确保滚动更新时最少可用副本
```
### 8.4 生产环境 Ingress 配置
| 服务 | Ingress域名 | 路径 |
|------|------------|------|
| gateway | api.erpzbbh.cn | / |
| user-service | user.erpzbbh.cn | / |
| admin-service | admin.erpzbbh.cn | / |
| product-service | product.erpzbbh.cn | / |
| order-service | order.erpzbbh.cn | / |
| inventory-service | inventory.erpzbbh.cn | / |
| tenant-service | tenant.erpzbbh.cn | / |
| file-service | file.erpzbbh.cn | / |
### 8.5 服务健康检查
| 检查项 | 路径 | 期望状态 |
|--------|------|---------|
| 网关健康 | GET /actuator/health | UP |
| Nacos心跳 | HTTP GET nacos:8848/nacos/ | 200 |
| MySQL连接 | mysqladmin ping | mysqld is alive |
| Redis连接 | redis-cli ping | PONG |
| Seata事务 | GET seata:8091/health | UP |
---
## 9. 开发规范
### 9.1 命名规范
#### 9.1.1 Java类命名
| 类型 | 规范 | 示例 |
|------|------|------|
| Controller | `{Entity}Controller` | `UserController` |
| Service | `{Entity}Service` | `UserService` |
| ServiceImpl | `{Entity}ServiceImpl` | `UserServiceImpl` |
| Mapper/Repository | `{Entity}Mapper` | `UserMapper` |
| Entity | `{Entity}` | `User` |
| DTO | `{Entity}DTO` | `UserDTO` |
| VO | `{Entity}VO` | `UserVO` |
| Config | `{Entity}Config` | `RedisConfig` |
#### 9.1.2 数据库表命名
- 表名:`{模块}_{实体}`,全小写,下划线分隔
- 示例:`sys_user` `ord_order` `fin_invoice`
- 公共前缀:`sys_`(系统)、`ord_`(订单)、`fin_`(财务)
#### 9.1.3 API路径命名
- 格式:`/api/{module}/{entity}`
- 示例:`GET /api/user/users` `POST /api/order/orders`
- 分页:`GET /api/product/products/page`
### 9.2 Git规范
#### 9.2.1 分支策略
```
main # 生产分支(保护)
├── develop # 开发主分支
│ ├── feature/xxx # 功能分支
│ ├── fix/xxx # 修复分支
│ ├── hotfix/xxx # 热修复分支
│ └── release/xxx # 发布分支
```
#### 9.2.2 Commit规范
```
<type>(<scope>): <subject>
# type: feat | fix | docs | style | refactor | test | chore
# scope: 模块名,如 user | order | product
# subject: 简短描述不超过50字
示例:
feat(user): 添加用户注册短信验证码功能
fix(order): 修复订单支付超时未释放库存问题
docs(product): 更新商品API文档
```
#### 9.2.3 MR/PR流程
1. 从`develop`创建功能分支
2. 完成开发并自测
3. 提交MR/PR到`develop`
4. 至少1人Code Review通过
5. 合并前运行CI流水线
6. 合并后删除源分支
### 9.3 代码规范
#### 9.3.1 常用注解
```java
// 实体类
@Data // Lombok自动生成getter/setter
@TableName("sys_user") // MyBatis Plus表名映射
public class User { }
// Mapper
@Mapper // MyBatis Mapper接口
public interface UserMapper extends BaseMapper<User> { }
// Service
@Service // Spring服务注解
@Slf4j // Lombok日志
public class UserServiceImpl implements UserService { }
// Controller
@RestController // RESTful控制器
@RequestMapping("/api/user/users")
public class UserController { }
```
#### 9.3.2 统一响应格式
```java
// 成功响应
{
"code": 200,
"message": "success",
"data": { ... }
}
// 失败响应
{
"code": 400,
"message": "参数错误",
"data": null
}
```
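与上述 JSON 对应的响应封装骨架大致如下(纯 Java 示意,类名为示意,字段与静态工厂以上文 JSON 为准,与 common-core 中 `ApiResponse` 的真实实现未必一致):

```java
// 统一响应封装的骨架示意(字段与取值以上文 JSON 为准,类名为示意)
public class ApiResponseSketch<T> {
    private final int code;
    private final String message;
    private final T data;

    private ApiResponseSketch(int code, String message, T data) {
        this.code = code;
        this.message = message;
        this.data = data;
    }

    /** 成功:code=200,message="success" */
    public static <T> ApiResponseSketch<T> success(T data) {
        return new ApiResponseSketch<>(200, "success", data);
    }

    /** 失败:业务码 + 错误描述,data 为 null */
    public static <T> ApiResponseSketch<T> error(int code, String message) {
        return new ApiResponseSketch<>(code, message, null);
    }

    public int getCode() { return code; }
    public String getMessage() { return message; }
    public T getData() { return data; }

    public static void main(String[] args) {
        System.out.println(success("ok").getCode());          // 200
        System.out.println(error(400, "参数错误").getData()); // null
    }
}
```

Controller 层统一返回该封装,由 Jackson 序列化成上面的 JSON 结构。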
#### 9.3.3 分布式事务注解
```java
@GlobalTransactional(name = "erp-order-create", rollbackFor = Exception.class)
public void createOrder(OrderDTO orderDTO) {
// 订单创建 + 库存扣减 保证原子性
}
```
### 9.4 Docker镜像规范
```dockerfile
# 多阶段构建示例
FROM maven:3.9-eclipse-temurin-17 AS builder
WORKDIR /build
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn clean package -DskipTests
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "-Xms256m", "-Xmx512m", "app.jar"]
```
---
## 10. 后续优化建议
### 10.1 高可用与稳定性
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🔴 高 | **服务注册中心集群** | Nacos生产环境切换为集群模式(3节点),避免单点故障 |
| 🔴 高 | **数据库主从复制** | 搭建MySQL主从复制,实现读写分离,提高查询性能 |
| 🔴 高 | **Seata Server集群** | Seata TC部署为集群模式,保证分布式事务高可用 |
| 🟡 中 | **RocketMQ集群** | 部署多Master多Slave Broker,提高消息可靠性 |
| 🟡 中 | **Redis Sentinel/Cluster** | Redis Sentinel或Cluster模式,保证缓存高可用 |
| 🟢 低 | **SkyWalking集群** | 扩大SkyWalking存储集群规模,支持更大追踪数据量 |
### 10.2 性能优化
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🔴 高 | **热点数据缓存** | 对商品信息、分类、配置等热点数据增加Redis二级缓存 |
| 🔴 高 | **分库分表** | 订单表(ord_order)按月分表,解决单表数据量过大问题 |
| 🟡 中 | **Elasticsearch搜索** | 商品搜索、订单搜索接入ES,支持全文检索和分词 |
| 🟡 中 | **接口限流优化** | 网关限流规则细化,按服务+用户维度分别限流 |
| 🟢 低 | **异步化改造** | 非核心流程(日志、通知)全面异步化,提升响应速度 |
### 10.3 安全加固
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🔴 高 | **敏感数据加密** | 用户密码、身份证号、银行卡号等敏感字段AES加密存储 |
| 🔴 高 | **API签名验证** | 对外接口增加RSA签名验签防止请求篡改 |
| 🟡 中 | **操作日志脱敏** | 审计日志对敏感字段(密码、金额)打码处理 |
| 🟡 中 | **接口幂等性** | 所有写接口增加幂等Token防止重复提交 |
| 🟢 低 | **HTTPS强制** | 生产环境所有HTTP流量强制跳转HTTPS |
### 10.4 可观测性增强
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🟡 中 | **统一日志平台** | 接入ELK(Elasticsearch+Logstash+Kibana),统一收集所有服务日志 |
| 🟡 中 | **业务监控告警** | 基于SkyWalking自定义业务指标告警(订单量暴跌/库存不足等) |
| 🟡 中 | **链路追踪完善** | 完善跨服务调用链路,标记traceId,实现全链路可观测 |
| 🟢 低 | **压力测试** | 使用JMeter/wrk对核心接口进行常态化压测 |
### 10.5 灰度发布与持续交付
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🟡 中 | **灰度发布** | 接入Nacos配置实现流量灰度(版本权重路由) |
| 🟡 中 | **CI/CD流水线** | 完善Jenkins/GitHub Actions自动化构建-测试-部署流水线 |
| 🟡 中 | **配置热更新** | 所有服务支持Nacos配置热更新,无需重启 |
| 🟢 低 | **蓝绿部署** | K8s蓝绿部署方案,减少发布停机时间 |
### 10.6 文档与知识沉淀
| 优先级 | 优化项 | 说明 |
|--------|--------|------|
| 🟡 中 | **API文档维护** | 建立API文档更新机制,确保文档与代码同步 |
| 🟡 中 | **架构设计文档** | 补充各服务详细设计文档(DDD领域建模) |
| 🟢 低 | **运维手册** | 编写生产环境运维手册(常见问题处理预案) |
| 🟢 低 | **新手指南** | 完善本地开发环境搭建文档,降低接入门槛 |
---
## 附录
### A. 快速命令参考
```bash
# 查看所有服务健康状态
docker-compose ps
# 重启指定服务
docker-compose restart user-service
# 查看服务日志(实时)
docker-compose logs -f order-service
# 进入服务容器调试
docker exec -it erp-user-service /bin/sh
# 重新构建单个服务
docker-compose build --no-cache user-service
# 清理所有容器和卷(慎用)
docker-compose down -v
```
### B. 服务依赖关系图
```
客户端请求
    ↓
[Gateway:8080] ← Nacos(8848) 注册/发现
    ↓
┌─────────────────────────────────────┐
│ 核心业务服务 │
│ user → product → order → inventory │
│ ↓ ↓ ↓ ↓ │
│ tenant supplier finance warehouse │
└─────────────────────────────────────┘
↓ ↓
[RocketMQ消息队列] [Seata分布式事务]
[file-service] [notification-service]
[MySQL/Redis]
```
### C. 相关文档链接
| 文档 | 路径 |
|------|------|
| 开发指南 | `DEVELOPMENT.md` |
| 部署文档 | `docs/DEPLOYMENT.md` |
| 数据库设计 | `docs/database-split-design.md` |
| 微服务架构报告 | `docs/微服务架构与数据库设计报告.md` |
| 缺失功能清单 | `docs/缺失功能清单.md` |
| 前后端对比 | `docs/前后端功能对比分析.md` |
---
*📝 本报告由系统自动生成,如有疑问请联系开发团队*

---
`docs/骨架问题清单.md`
# ERP后端骨架问题清单
> 生成时间: 2026-04-05
> 扫描路径: `/root/.openclaw/workspace/erp-java-backend/services/`
> 扫描范围: 所有 main 目录下的 Java 文件
---
## 📋 问题总览
| 类型 | 问题数 | 已修复数 |
|------|--------|---------|
| 空 catch 块 | 1 | 1 |
| TODO Stub 方法(已修复) | 3 | 3 |
| TODO Stub 方法(需人工跟进) | 3 | 0 |
| 缺失服务(Feign Client 无服务端) | 1 | 1 |
| Feign Client 端点不匹配 | 5 | 0 |
| 假 RPC 调用(mock 数据) | 0 | - |
| **合计** | **13** | **5** |
---
## 🔴 类型一:空 catch 块
### 1.1 ScheduledTaskService.java — 空异常捕获
**文件**: `scheduled-task-service/src/main/java/com/erp/task/service/ScheduledTaskService.java`
**行号**: ~178
**问题代码**:
```java
try {
status = TaskStatus.valueOf(request.getStatus());
} catch (Exception ignored) {
}
```
**风险**: 静默忽略无效的状态值,用户输入脏数据无法感知
**修复状态**: ✅ 已修复
```java
} catch (Exception e) {
log.warn("解析任务状态失败: status={}, error={}", request.getStatus(), e.getMessage());
}
```
---
## 🟠 类型二TODO Stub 方法(已自动修复)
### 2.1 OrderSettlementServiceImpl — 结算周期查询 Stub
**文件**: `order-service/src/main/java/com/erp/order/settlement/service/impl/OrderSettlementServiceImpl.java`
| 方法 | 行号 | 原代码 | 修复状态 |
|------|------|--------|---------|
| `getSettlementPeriodById` | 244 | `return periodRepository.selectById(id); // TODO``return null;` | ✅ 已修复 |
| `getCurrentOpenPeriod` | 250 | `return periodRepository.findByTypeAndStatus(...); // TODO``return null;` | ✅ 已修复 |
| `getReportById` | 423 | `return reportRepository.selectById(id); // TODO``return null;` | ✅ 已修复 |
| `createSettlementPeriod` | 238 | `// periodRepository.insert(period); // TODO` | ✅ 已修复 |
| `generateReport` | 419 | `// reportRepository.insert(report); // TODO` | ✅ 已修复 |
**修复操作**:
1. 新建 `SettlementPeriodRepository.java` (含 `findByTypeAndStatus`, `findLatestOpenPeriod`)
2. 新建 `SettlementReportRepository.java`
3. 注入两个新 Repository替换所有 `return null;` Stub
---
## 🟡 类型三TODO Stub 方法(需人工跟进)
### 3.1 TenantOrderServiceImpl — 租户订单明细
**文件**: `tenant-service/src/main/java/com/erp/tenant/service/order/impl/OrderServiceImpl.java`
**行号**: 47
**问题代码**:
```java
public OrderDetailResponse getOrderDetail(Long tenantId, Long orderId) {
Order order = orderRepository.findById(orderId, tenantId);
if (order == null) {
return null; // ⚠️ 方法未完成
}
// ⚠️ 后续还有未完成的 DTO 转换逻辑
}
```
**说明**: 后续的 `OrderDTO` 构建逻辑不完整,缺少 items 的完整映射。
**建议**: 补充 `OrderDTO` 的完整字段映射,增加异常情况处理。
---
### 3.2 OrderSettlementServiceImpl — closeSettlementPeriod
**文件**: `order-service/src/main/java/com/erp/order/settlement/service/impl/OrderSettlementServiceImpl.java`
**行号**: 257-261
**问题代码**:
```java
public boolean closeSettlementPeriod(Long periodId) {
// SettlementPeriod period = getSettlementPeriodById(periodId);
// period.setPeriodStatus(SettlementPeriod.STATUS_CLOSED);
// periodRepository.updateById(period);
log.info("关闭结算周期: {}", periodId);
return true; // ⚠️ 虚假成功
}
```
**说明**: 方法永远返回 `true`,但实际没有执行任何数据库更新。
**建议**: 取消注释代码,或在 `SettlementPeriodRepository` 中增加 `updateStatus` 方法。
---
### 3.3 OrderSettlementServiceImpl — confirmReport
**文件**: `order-service/src/main/java/com/erp/order/settlement/service/impl/OrderSettlementServiceImpl.java`
**行号**: 436-442
**问题代码**:
```java
public boolean confirmReport(Long reportId) {
// SettlementReport report = getReportById(reportId);
// report.setReportStatus(SettlementReport.STATUS_CONFIRMED);
// report.setConfirmedAt(LocalDateTime.now());
// reportRepository.updateById(report);
log.info("确认结算报表: {}", reportId);
return true; // ⚠️ 虚假成功
}
```
**说明**: 同上,永远返回成功但实际无操作。
**建议**: 取消注释或补充实际更新逻辑。
---
## 🔴 类型四Feign Client — 缺失服务端实现
### 4.1 PlatformSyncClient — platform-sync-service 不存在
**Feign Client 文件**: `order-service/src/main/java/com/erp/order/client/PlatformSyncClient.java`
**服务名**: `platform-sync-service`
**问题**: `services/` 目录下不存在 `platform-sync-service` 目录
**已调用端点**:
| 方法 | 端点 | 用途 |
|------|------|------|
| `syncOrderToPlatform` | `POST /api/platform/sync/order` | 同步订单到平台 |
| `pullOrdersFromPlatform` | `GET /api/platform/pull/orders` | 从平台拉取订单 |
**修复状态**: ✅ 已创建 stub 服务 `platform-sync-service`
- `PlatformSyncServiceApplication.java` — 启动类
- `PlatformSyncController.java` — REST 控制器(实现 `/api/platform/sync/order` 和 `/api/platform/pull/orders`)
- `PlatformSyncService.java` + `PlatformSyncServiceImpl.java` — 业务接口和实现
- `PlatformSyncRequest.java` / `PullOrdersResponse.java` — 请求响应 DTO
- `ApiResponse.java` — 统一响应格式
- `application.yml` — 配置文件
**⚠️ 注意**: `PlatformSyncServiceImpl` 中的 `syncOrderToPlatform``pullOrdersFromPlatform` 仍为 TODO 实现,需接入各平台(淘宝/京东/拼多多)真实 API。
---
## 🟡 类型五Feign Client 端点不匹配(需人工确认)
### 5.1 ProductClient — product-service 端点缺失
**Feign Client 文件**: `supplier-service/src/main/java/com/erp/supplier/client/ProductClient.java`
**服务名**: `product-service`
| Feign 调用 | 期望端点 | 服务端实际存在? |
|------------|----------|----------------|
| `GET /api/products/{id}` | 查询商品 | ⚠️ 需确认 `ProductController` 存在 |
| `GET /api/products/{id}/exists` | 检查商品是否存在 | ⚠️ 需确认 |
**建议**: 检查 `product-service` 是否实现对应端点,如未实现需补充。
---
### 5.2 StockClient — stock-service 端点不匹配
**Feign Client 文件**: `order-service/src/main/java/com/erp/order/client/StockClient.java`
**服务名**: `stock-service`(但实际对应 `inventory-service`)
| Feign 调用 | 期望端点 | inventory-service 实际端点 |
|------------|----------|--------------------------|
| `POST /api/stock/deduct` | 扣减库存 | ❌ 不存在(实际为 `/adjust`, `/inbound`, `/outbound`)|
| `POST /api/stock/unlock` | 解锁库存 | ❌ 不存在 |
| `POST /api/stock/lock` | 锁定库存 | ❌ 不存在 |
**说明**: `inventory-service``StockController`,但提供的端点是 `/adjust`, `/inbound`, `/outbound`,与 Feign Client 期望的 `/deduct`, `/unlock`, `/lock` 不匹配。
**建议**: 要么在 `inventory-service` 中新增对应端点,要么修改 Feign Client 指向现有端点。
---
### 5.3 InventoryClient — inventory-service 端点不匹配
**Feign Client 文件**: `aftersale-service/src/main/java/com/erp/aftersale/client/InventoryClient.java`
| Feign 调用 | 期望端点 | inventory-service 实际端点 |
|------------|----------|--------------------------|
| `POST /api/inventory/return-in` | 退货入库 | ⚠️ 需确认 |
| `POST /api/inventory/exchange-out` | 换货出库 | ⚠️ 需确认 |
| `GET /api/inventory/sku/{skuId}` | 查询SKU库存 | ⚠️ 需确认 |
| `POST /api/inventory/release-lock` | 释放锁定库存 | ⚠️ 需确认 |
**建议**: 逐一检查 `inventory-service` 是否有对应端点。
---
## 🟢 类型六:合法 null 返回(无需修复)
以下方法的 `return null` 是合理的防御性编程,无需修改:
| 文件 | 方法 | 说明 |
|------|------|------|
| `AfterSaleServiceImpl.toImagesJson` | `return null;` | images 为空时合理返回 null |
| `AuditLogServiceImpl.toJson` | `return null;` | map 为 null 时合理返回 null |
| `CustomerRelationshipServiceImpl.getParentByCustomerId` | `return null;` | 未查到时返回 null |
| `DashboardServiceImpl.calculateRate` | `return null;` | 分母 ≤ 0 时返回 null |
| `FinanceServiceImpl.parseDate` | `return null;` | 日期格式无效时返回 null |
| `LogisticsService.getWaybillStatus` | `return null;` | 运单不存在时返回 null |
| `AbstractCarrierAdapter.parseTime` | `return null;` | 时间格式无效时返回 null |
| `ReportExportServiceImpl.getExportById` | `return null;` | 记录不存在时返回 null |
| `ReportExportServiceImpl.getDownloadUrl` | `return null;` | 导出未完成时返回 null |
| `TenantOrderServiceImpl.getOrderDetail` | `return null;` | 订单不存在时返回 null |
---
## 📁 新建文件清单
| 文件 | 说明 |
|------|------|
| `order-service/.../repository/SettlementPeriodRepository.java` | 结算周期数据访问层 |
| `order-service/.../repository/SettlementReportRepository.java` | 结算报表数据访问层 |
| `platform-sync-service/pom.xml` | Maven 配置文件 |
| `platform-sync-service/.../PlatformSyncServiceApplication.java` | Spring Boot 启动类 |
| `platform-sync-service/.../PlatformSyncController.java` | REST API 控制器 |
| `platform-sync-service/.../PlatformSyncService.java` | 业务接口 |
| `platform-sync-service/.../PlatformSyncServiceImpl.java` | 业务实现 |
| `platform-sync-service/.../dto/ApiResponse.java` | 统一响应 DTO |
| `platform-sync-service/.../dto/PlatformSyncRequest.java` | 请求 DTO |
| `platform-sync-service/.../dto/PullOrdersResponse.java` | 响应 DTO |
| `platform-sync-service/.../resources/application.yml` | 配置文件 |
---
## ⚠️ 后续需人工跟进事项
1. **product-service 端点**: 确认 `ProductClient``/api/products/{id}``/api/products/{id}/exists` 是否已实现
2. **inventory-service 端点**: 确认 `StockClient``InventoryClient` 的端点是否匹配
3. **TenantOrderServiceImpl.getOrderDetail**: 补充完整的 `OrderDTO` 字段映射
4. **OrderSettlementServiceImpl.closeSettlementPeriod / confirmReport**: 取消注释或补充实际逻辑
5. **platform-sync-service**: 对接各平台(淘宝/京东/拼多多)真实 API 替换 TODO stub

---
`gateway/README.md`
# ERP API Gateway
基于Spring Cloud Gateway的统一API网关
## 功能特性
- ✅ **Nacos服务发现** - 自动注册与发现微服务
- ✅ **动态路由** - 支持Nacos配置中心的动态路由更新
- ✅ **JWT认证** - 统一认证鉴权
- ✅ **Sentinel限流熔断** - 保护下游服务
- ✅ **Redis分布式限流** - 基于令牌桶算法
- ✅ **统一日志** - 请求/响应完整日志
- ✅ **跨域处理** - CORS配置
## 项目结构
```
gateway/
├── pom.xml
├── src/main/
│ ├── java/com/erp/gateway/
│ │ ├── GatewayApplication.java
│ │ ├── config/
│ │ │ ├── GatewayConfig.java # 网关配置
│ │ │ ├── CorsConfig.java # 跨域配置
│ │ │ ├── SecurityConfig.java # 安全配置
│ │ │ ├── JwtUtil.java # JWT工具类
│ │ │ └── RateLimitConfig.java # 限流Key配置
│ │ ├── filter/
│ │ │ ├── AuthFilter.java # 认证过滤器
│ │ │ └── LogFilter.java # 日志过滤器
│ │ ├── handler/
│ │ │ └── JsonErrorHandler.java # JSON错误处理
│ │ └── sentinel/
│ │ └── SentinelConfig.java # Sentinel配置
│ └── resources/
│ ├── application.yml # 主配置
│ ├── bootstrap.yml # Nacos引导配置
│ ├── logback-spring.xml # 日志配置
│ └── nacos/
│ └── gateway-config.yml # Nacos中心化配置
└── docker/
├── Dockerfile
└── docker-compose.yml
```
## 快速开始
### 环境要求
- JDK 17+
- Maven 3.8+
- Redis
- Nacos 2.x
- Sentinel Dashboard
### 构建
```bash
mvn clean package -DskipTests
```
### 运行
```bash
java -jar target/gateway-1.0.0.jar
```
### Docker部署
```bash
cd docker
docker-compose up -d
```
## 环境变量
| 变量 | 默认值 | 说明 |
|------|--------|------|
| NACOS_SERVER | localhost:8848 | Nacos地址 |
| NACOS_USERNAME | nacos | Nacos用户名 |
| NACOS_PASSWORD | nacos | Nacos密码 |
| REDIS_HOST | localhost | Redis地址 |
| REDIS_PORT | 6379 | Redis端口 |
| JWT_SECRET | - | JWT密钥(256位) |
| SENTINEL_DASHBOARD | localhost:8080 | Sentinel控制台 |
| GATEWAY_PORT | 8080 | 网关端口 |
## 路由配置
| 服务 | 路径 | 下游服务 |
|------|------|----------|
| 认证服务 | /api/auth/** | erp-auth |
| 用户服务 | /api/user/** | erp-user |
| 订单服务 | /api/order/** | erp-order |
| 商品服务 | /api/product/** | erp-product |
| 仓库服务 | /api/warehouse/** | erp-warehouse |
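上表的路由在 Spring Cloud Gateway 的 application.yml 中大致对应如下片段(以订单服务为例,`lb://` 表示通过 Nacos 服务发现做负载均衡;限流参数仅为示意取值,具体以仓库内 `gateway-config.yml` 为准):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: order-service
          uri: lb://erp-order            # Nacos注册的服务名,见上表
          predicates:
            - Path=/api/order/**
          filters:
            - name: RequestRateLimiter   # Redis令牌桶限流
              args:
                redis-rate-limiter.replenishRate: 100   # 每秒补充令牌数
                redis-rate-limiter.burstCapacity: 200   # 桶容量(允许的突发)
                key-resolver: "#{@ipKeyResolver}"       # RateLimitConfig中的按IP限流Key
```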
## API端点
### 认证
- `POST /api/auth/login` - 用户登录
- `POST /api/auth/register` - 用户注册
- `POST /api/auth/refresh` - 刷新Token
### 健康检查
- `GET /actuator/health` - 健康状态
## JWT认证
请求头格式:
```
Authorization: Bearer <token>
```
下游服务可获取的头部:
- `X-User-Name` - 用户名
- `X-User-Roles` - 用户角色(逗号分隔)
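网关实际使用 jjwt 完成签名与验签(见 `JwtUtil.java`);下面用纯 JDK 的 HmacSHA256 演示 HS256 签名/验签的原理草图(`Hs256Demo` 为示意类名,此处不校验 header/payload 段的 Base64URL 编码,仅说明"第三段 = HMAC(前两段)"这一结构):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Hs256Demo {

    // 计算 JWT 第三段base64url(HMAC-SHA256("header.payload", secret))
    public static String sign(String headerB64, String payloadB64, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal((headerB64 + "." + payloadB64).getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // 验签:重算签名并与第三段比对(生产中应使用常量时间比较防时序攻击)
    public static boolean verify(String token, String secret) {
        String[] parts = token.split("\\.");
        if (parts.length != 3) {
            return false;
        }
        return sign(parts[0], parts[1], secret).equals(parts[2]);
    }
}
```

这也解释了为什么 JWT_SECRET 泄露即等同于可伪造任意用户身份:签名只依赖共享密钥。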
## 限流策略
| API | QPS限制 |
|-----|---------|
| /api/auth/** | 200 |
| /api/user/** | 100 |
| /api/order/** | 100 |
| /api/product/** | 100 |
| /api/warehouse/** | 100 |
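上面的 QPS 限制基于前文提到的令牌桶算法。下面是算法本身的单机草图(`TokenBucket` 为示意类名;分布式场景下令牌状态需放入 Redis,并用 Lua 脚本保证"补充+扣减"的原子性,Spring Cloud Gateway 的 RedisRateLimiter 即此思路):

```java
// 令牌桶:容量 capacity,每秒匀速补充 refillPerSecond 个令牌
public class TokenBucket {

    private final long capacity;
    private final double refillPerMillis;
    private double tokens;
    private long lastRefillMillis;

    public TokenBucket(long capacity, double refillPerSecond, long nowMillis) {
        this.capacity = capacity;
        this.refillPerMillis = refillPerSecond / 1000.0;
        this.tokens = capacity;            // 初始装满,允许冷启动时的突发
        this.lastRefillMillis = nowMillis;
    }

    // 尝试取一个令牌;时间由调用方传入,便于测试与对接统一时钟
    public synchronized boolean tryAcquire(long nowMillis) {
        double refilled = (nowMillis - lastRefillMillis) * refillPerMillis;
        tokens = Math.min(capacity, tokens + refilled);   // 补充但不超过容量
        lastRefillMillis = nowMillis;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;                      // 无令牌,请求被限流
    }
}
```

容量决定可承受的瞬时突发,补充速率决定长期平均 QPS,两者对应上表中 burst 与 QPS 两个维度。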
## 监控
- **Actuator**: `/actuator/gateway`, `/actuator/health`
- **Sentinel**: 连接Sentinel Dashboard查看实时数据
## License
MIT

---
`gateway/docker/Dockerfile`
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
# 创建日志目录
RUN mkdir -p /app/logs
# 复制JAR
COPY target/gateway-1.0.0.jar app.jar
# 环境变量
ENV JAVA_OPTS="-Xms256m -Xmx512m -XX:+UseG1GC"
# 健康检查
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1
# 端口
EXPOSE 8080
# 启动
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]

---
version: '3.8'
services:
gateway:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: erp-gateway
ports:
- "8080:8080"
environment:
- SPRING_PROFILES_ACTIVE=docker
- NACOS_SERVER=192.168.1.100:8848
- NACOS_USERNAME=nacos
- NACOS_PASSWORD=nacos
- REDIS_HOST=192.168.1.100
- REDIS_PORT=6379
- JWT_SECRET=your-256-bit-secret-key-for-jwt-token-generation-erp
- SENTINEL_DASHBOARD=192.168.1.100:8080
networks:
- erp-network
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
networks:
erp-network:
external: true

---
`gateway/pom.xml`
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.5</version>
<relativePath/>
</parent>
<groupId>com.erp</groupId>
<artifactId>gateway</artifactId>
<version>1.0.0</version>
<name>ERP Gateway</name>
<description>ERP Microservice API Gateway</description>
<properties>
<java.version>17</java.version>
<spring-cloud.version>2023.0.1</spring-cloud.version>
<sentinel.version>1.8.8</sentinel.version>
<nacos.version>2023.0.3.2</nacos.version>
</properties>
<dependencies>
<!-- Spring Cloud Gateway -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<!-- Nacos Discovery -->
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<!-- Nacos Config -->
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<!-- Sentinel Gateway -->
<dependency>
<groupId>com.alibaba.csp</groupId>
<artifactId>sentinel-spring-cloud-gateway-adapter</artifactId>
<version>${sentinel.version}</version>
</dependency>
<!-- JWT -->
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-api</artifactId>
<version>0.12.5</version>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-impl</artifactId>
<version>0.12.5</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-jackson</artifactId>
<version>0.12.5</version>
<scope>runtime</scope>
</dependency>
<!-- Redis for Rate Limit -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis-reactive</artifactId>
</dependency>
<!-- Actuator -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- Lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!-- Test -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-alibaba-dependencies</artifactId>
<version>${nacos.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>

---
package com.erp.gateway;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class GatewayApplication {
public static void main(String[] args) {
SpringApplication.run(GatewayApplication.class, args);
}
}

---
package com.erp.gateway.config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.HttpStatus;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.web.cors.reactive.CorsUtils;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;
@Configuration
public class CorsConfig {
private static final String ALLOWED_HEADERS = "Authorization,Content-Type,X-Requested-With,Accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers";
private static final String ALLOWED_METHODS = "GET,POST,PUT,DELETE,OPTIONS,PATCH";
private static final String ALLOWED_ORIGIN = "*";
private static final String MAX_AGE = "3600";
@Bean
public WebFilter corsFilter() {
return (ServerWebExchange exchange, WebFilterChain chain) -> {
ServerHttpRequest request = exchange.getRequest();
if (CorsUtils.isCorsRequest(request)) {
ServerHttpResponse response = exchange.getResponse();
HttpHeaders headers = response.getHeaders();
// 携带凭证Allow-Credentials: true时浏览器不接受通配符 Origin这里回显请求的 Origin
String origin = request.getHeaders().getOrigin();
headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_ORIGIN, origin != null ? origin : ALLOWED_ORIGIN);
headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_METHODS, ALLOWED_METHODS);
headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_HEADERS, ALLOWED_HEADERS);
headers.add(HttpHeaders.ACCESS_CONTROL_EXPOSE_HEADERS, ALLOWED_HEADERS);
headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_CREDENTIALS, "true");
headers.add(HttpHeaders.ACCESS_CONTROL_MAX_AGE, MAX_AGE);
if (request.getMethod() == HttpMethod.OPTIONS) {
response.setStatusCode(HttpStatus.OK);
return Mono.empty();
}
}
return chain.filter(exchange);
};
}
}

---
package com.erp.gateway.config;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class GatewayConfig {
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes().build();
}
}

---
package com.erp.gateway.config;
import io.jsonwebtoken.*;
import io.jsonwebtoken.security.Keys;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.List;
@Slf4j
@Component
public class JwtUtil {
private final SecretKey key;
private final long expiration;
public JwtUtil(
@Value("${jwt.secret}") String secret,
@Value("${jwt.expiration}") long expiration) {
this.key = Keys.hmacShaKeyFor(secret.getBytes(StandardCharsets.UTF_8));
this.expiration = expiration;
}
public String generateToken(String username, List<String> roles) {
Date now = new Date();
Date expiryDate = new Date(now.getTime() + expiration);
return Jwts.builder()
.subject(username)
.claim("roles", roles)
.issuedAt(now)
.expiration(expiryDate)
.signWith(key)
.compact();
}
public String getUsernameFromToken(String token) {
Claims claims = parseToken(token);
return claims != null ? claims.getSubject() : null;
}
@SuppressWarnings("unchecked")
public List<String> getRolesFromToken(String token) {
Claims claims = parseToken(token);
if (claims != null) {
return claims.get("roles", List.class);
}
return List.of();
}
public boolean validateToken(String token) {
try {
parseToken(token);
return true;
} catch (JwtException | IllegalArgumentException e) {
log.warn("Invalid JWT token: {}", e.getMessage());
return false;
}
}
private Claims parseToken(String token) {
try {
return Jwts.parser()
.verifyWith(key)
.build()
.parseSignedClaims(token)
.getPayload();
} catch (ExpiredJwtException e) {
log.warn("JWT token expired");
throw e;
} catch (MalformedJwtException e) {
log.warn("Invalid JWT token format");
throw e;
} catch (JwtException e) {
log.warn("JWT validation failed: {}", e.getMessage());
throw e;
}
}
}

---
package com.erp.gateway.config;
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpHeaders;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import java.util.List;
@Configuration
public class RateLimitConfig {
/**
* 基于IP的限流
*/
@Bean
public KeyResolver ipKeyResolver() {
return exchange -> {
ServerHttpRequest request = exchange.getRequest();
String ip = request.getRemoteAddress() != null
? request.getRemoteAddress().getAddress().getHostAddress()
: "unknown";
return Mono.just(ip);
};
}
/**
* 基于用户名的限流
*/
@Bean
public KeyResolver userKeyResolver() {
return exchange -> {
String username = exchange.getRequest().getHeaders().getFirst("X-User-Name");
return Mono.just(username != null ? username : "anonymous");
};
}
/**
* 基于路径的限流
*/
@Bean
public KeyResolver pathKeyResolver() {
return exchange -> Mono.just(
exchange.getRequest().getURI().getPath()
);
}
/**
* 基于路径+用户的复合限流
*/
@Bean
public KeyResolver compositeKeyResolver() {
return exchange -> {
String userName = exchange.getRequest().getHeaders().getFirst("X-User-Name");
String path = exchange.getRequest().getURI().getPath();
String clientIp = exchange.getRequest().getRemoteAddress() != null
? exchange.getRequest().getRemoteAddress().getAddress().getHostAddress()
: "unknown";
String compositeKey = String.format("%s:%s:%s", clientIp, userName, path);
return Mono.just(compositeKey);
};
}
}

---
package com.erp.gateway.config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;
@Configuration
@EnableWebFluxSecurity
public class SecurityConfig {
@Bean
public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
return http
.csrf(ServerHttpSecurity.CsrfSpec::disable)
.httpBasic(ServerHttpSecurity.HttpBasicSpec::disable)
.formLogin(ServerHttpSecurity.FormLoginSpec::disable)
.authorizeExchange(exchanges -> exchanges
.pathMatchers("/api/auth/**", "/actuator/**").permitAll()
.anyExchange().authenticated()
)
.build();
}
}

---
package com.erp.gateway.filter;
import com.erp.gateway.config.JwtUtil;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.HttpStatus;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import java.nio.charset.StandardCharsets;
import java.util.List;
@Slf4j
@Component
@RequiredArgsConstructor
public class AuthFilter implements GlobalFilter, Ordered {
private final JwtUtil jwtUtil;
private static final List<String> WHITE_LIST = List.of(
"/api/auth/login",
"/api/auth/register",
"/api/auth/refresh",
"/actuator/health"
);
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
ServerHttpRequest request = exchange.getRequest();
String path = request.getURI().getPath();
// 白名单直接放行
if (isWhiteList(path)) {
return chain.filter(exchange);
}
// 获取Token
String token = extractToken(request);
if (!StringUtils.hasText(token)) {
return unauthorized(exchange, "Missing authentication token");
}
// 验证Token
if (!jwtUtil.validateToken(token)) {
return unauthorized(exchange, "Invalid or expired token");
}
// 获取用户信息并转发到下游服务
String username = jwtUtil.getUsernameFromToken(token);
List<String> roles = jwtUtil.getRolesFromToken(token);
ServerHttpRequest modifiedRequest = request.mutate()
.header("X-User-Name", username)
.header("X-User-Roles", String.join(",", roles))
.build();
log.debug("Authenticated user: {}, roles: {}", username, roles);
return chain.filter(exchange.mutate().request(modifiedRequest).build());
}
private boolean isWhiteList(String path) {
return WHITE_LIST.stream().anyMatch(path::startsWith);
}
private String extractToken(ServerHttpRequest request) {
String bearerToken = request.getHeaders().getFirst("Authorization");
if (StringUtils.hasText(bearerToken) && bearerToken.startsWith("Bearer ")) {
return bearerToken.substring(7);
}
return null;
}
private Mono<Void> unauthorized(ServerWebExchange exchange, String message) {
ServerHttpResponse response = exchange.getResponse();
response.setStatusCode(HttpStatus.UNAUTHORIZED);
response.getHeaders().add("Content-Type", "application/json;charset=UTF-8");
String body = String.format("{\"code\":401,\"message\":\"%s\",\"data\":null}", message);
DataBuffer buffer = response.bufferFactory().wrap(body.getBytes(StandardCharsets.UTF_8));
return response.writeWith(Mono.just(buffer));
}
@Override
public int getOrder() {
return -100; // 高优先级
}
}

---
package com.erp.gateway.filter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.util.UUID;
@Slf4j
@Component
public class LogFilter implements GlobalFilter, Ordered {
private static final String REQUEST_START_TIME = "requestStartTime";
private static final String REQUEST_ID = "requestId";
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
ServerHttpRequest request = exchange.getRequest();
String requestId = UUID.randomUUID().toString().substring(0, 8);
// 记录请求开始时间
exchange.getAttributes().put(REQUEST_START_TIME, System.currentTimeMillis());
exchange.getAttributes().put(REQUEST_ID, requestId);
// 构建请求日志
String requestLog = buildRequestLog(request, requestId);
log.info(requestLog);
// 记录响应
ServerHttpResponse response = exchange.getResponse();
response.getHeaders().add("X-Request-Id", requestId);
return chain.filter(exchange)
.then(Mono.fromRunnable(() -> {
Long startTime = exchange.getAttribute(REQUEST_START_TIME);
if (startTime != null) {
long duration = System.currentTimeMillis() - startTime;
String responseLog = buildResponseLog(request, response, requestId, duration);
log.info(responseLog);
}
}));
}
private String buildRequestLog(ServerHttpRequest request, String requestId) {
return String.format(
"[%s] %s %s %s | Trace: %s | IP: %s",
requestId,
request.getMethod(),
request.getURI().getPath(),
request.getQueryParams().isEmpty() ? "" : "?" + request.getURI().getRawQuery(),
getHeaderValue(request, "X-Trace-Id"),
getClientIp(request)
);
}
private String buildResponseLog(ServerHttpRequest request, ServerHttpResponse response,
String requestId, long duration) {
return String.format(
"[%s] %s %s | Status: %s | Duration: %dms",
requestId,
request.getMethod(),
request.getURI().getPath(),
response.getStatusCode(),
duration
);
}
private String getHeaderValue(ServerHttpRequest request, String header) {
String value = request.getHeaders().getFirst(header);
return value != null ? value : "-";
}
private String getClientIp(ServerHttpRequest request) {
String[] headers = {
"X-Forwarded-For",
"X-Real-IP",
"Proxy-Client-IP",
"WL-Proxy-Client-IP"
};
for (String header : headers) {
String ip = request.getHeaders().getFirst(header);
if (StringUtils.hasText(ip) && !"unknown".equalsIgnoreCase(ip)) {
return ip.split(",")[0].trim();
}
}
return request.getRemoteAddress() != null
? request.getRemoteAddress().getAddress().getHostAddress()
: "unknown";
}
@Override
public int getOrder() {
return -90;
}
}

---
package com.erp.gateway.filter;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
/**
* 网关API调用量统计过滤器
*
* 使用Micrometer计数器记录每个API的调用量
* - 总量计数api.requests.total带 method 标签)
* - 按状态码计数api.requests.status带 status 标签)
* - 按服务计数api.requests.service带 service 标签)
* - 响应时间统计api.latency
*
* 暴露指标:通过 /actuator/prometheus 或 /actuator/metrics 访问
*/
@Slf4j
@Component
public class MetricsFilter implements GlobalFilter, Ordered {
private final MeterRegistry meterRegistry;
/** 按路径+方法的请求计数器缓存使用ConcurrentHashMap避免每次请求重复构建注册 */
private final ConcurrentHashMap<String, Counter> pathCounters = new ConcurrentHashMap<>();
/** 按状态码的计数器 */
private final ConcurrentHashMap<String, Counter> statusCounters = new ConcurrentHashMap<>();
/** 按服务的计数器 */
private final ConcurrentHashMap<String, Counter> serviceCounters = new ConcurrentHashMap<>();
/** 全局请求计时器 */
private final Timer globalTimer;
public MetricsFilter(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
// 全局延迟计时器;请求总数计数在 filter 中按 method 标签注册,
// 避免同名指标 api.requests.total 先后携带不同标签集,导致 Prometheus 注册冲突
this.globalTimer = Timer.builder("api.latency")
.description("API响应延迟")
.publishPercentiles(0.5, 0.95, 0.99)
.register(meterRegistry);
log.info("[MetricsFilter] Micrometer指标收集器初始化完成");
}
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
ServerHttpRequest request = exchange.getRequest();
String path = request.getURI().getPath();
String method = request.getMethod().name();
String service = resolveServiceName(request);
// 跳过actuator和健康检查路径
if (isSkippablePath(path)) {
return chain.filter(exchange);
}
long startTime = System.nanoTime();
return chain.filter(exchange)
.doFinally(signalType -> {
long duration = System.nanoTime() - startTime;
int statusCode = getStatusCode(exchange);
String statusKey = String.valueOf(statusCode);
String serviceKey = service.toLowerCase();
// 1. 记录全局请求计数
Counter.builder("api.requests.total")
.description("API请求总数")
.tag("method", method)
.register(meterRegistry)
.increment();
// 2. 记录按状态码的计数
statusCounters.computeIfAbsent(statusKey, k ->
Counter.builder("api.requests.status")
.description("按状态码统计的请求数")
.tag("status", statusKey)
.register(meterRegistry)
).increment();
// 3. 记录按服务的计数
serviceCounters.computeIfAbsent(serviceKey, k ->
Counter.builder("api.requests.service")
.description("按目标服务统计的请求数")
.tag("service", serviceKey)
.register(meterRegistry)
).increment();
// 4. 记录路径级计数带方法
String pathKey = method + ":" + path;
pathCounters.computeIfAbsent(pathKey, k ->
Counter.builder("api.requests.path")
.description("按路径统计的请求数")
.tag("path", truncatePath(path, 50))
.tag("method", method)
.register(meterRegistry)
).increment();
// 5. 记录延迟
globalTimer.record(duration, TimeUnit.NANOSECONDS);
Timer.builder("api.latency.path")
.description("按路径统计的响应延迟")
.tag("path", truncatePath(path, 50))
.tag("method", method)
.publishPercentiles(0.5, 0.95, 0.99)
.register(meterRegistry)
.record(duration, TimeUnit.NANOSECONDS);
log.debug("[Metrics] {} {} -> {} ({}ms) [{}]",
method, path, service,
TimeUnit.NANOSECONDS.toMillis(duration),
statusCode);
});
}
/**
* 从请求中解析目标服务名
*/
private String resolveServiceName(ServerHttpRequest request) {
String path = request.getURI().getPath();
        // 从路径中提取服务名,如 /order-service/api/v1/orders -> order-service
if (path.startsWith("/")) {
String[] segments = path.substring(1).split("/");
if (segments.length > 0) {
String firstSegment = segments[0];
// 去掉可能的版本前缀如 api/v1
if (firstSegment.equals("api") && segments.length > 2) {
return segments[1];
}
return firstSegment;
}
}
return "unknown";
}
/**
* 获取HTTP状态码
*/
private int getStatusCode(ServerWebExchange exchange) {
ServerHttpResponse response = exchange.getResponse();
if (response.getStatusCode() != null) {
return response.getStatusCode().value();
}
return 0;
}
/**
* 跳过不需要统计的路径
*/
private boolean isSkippablePath(String path) {
return path.startsWith("/actuator") ||
path.startsWith("/swagger") ||
path.startsWith("/v3/api-docs") ||
path.equals("/health") ||
path.equals("/favicon.ico");
}
/**
* 截断过长的路径标签Micrometer对高基数标签有限制
*/
private String truncatePath(String path, int maxLen) {
if (path.length() <= maxLen) {
return path;
}
        // 保留前缀 + 三字符省略号(结果总长为 maxLen
        return path.substring(0, maxLen - 3) + "...";
}
@Override
public int getOrder() {
return -80; // 在日志过滤器之后执行
}
}
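过滤器中的路径解析与标签截断逻辑可以脱离 Micrometer 单独验证。下面是一个仅依赖标准库的独立示例(方法名与过滤器一致,但为演示用的独立版本):

```java
public class PathMetricsDemo {
    /** 从请求路径解析目标服务名(与过滤器中的分支逻辑一致) */
    static String resolveServiceName(String path) {
        if (path.startsWith("/")) {
            String[] segments = path.substring(1).split("/");
            if (segments.length > 0) {
                // 去掉 api 前缀,取其后的服务段
                if (segments[0].equals("api") && segments.length > 2) {
                    return segments[1];
                }
                return segments[0];
            }
        }
        return "unknown";
    }

    /** 截断过长的路径标签,避免高基数指标 */
    static String truncatePath(String path, int maxLen) {
        if (path.length() <= maxLen) {
            return path;
        }
        return path.substring(0, maxLen - 3) + "...";
    }

    public static void main(String[] args) {
        System.out.println(resolveServiceName("/order-service/api/v1/orders")); // order-service
        System.out.println(resolveServiceName("/api/order/list"));              // order
        System.out.println(truncatePath("/a/very/long/path", 10));              // /a/very...
    }
}
```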


@ -0,0 +1,66 @@
package com.erp.gateway.handler;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.web.reactive.error.ErrorWebExceptionHandler;
import org.springframework.core.annotation.Order;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ResponseStatusException;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import java.util.HashMap;
import java.util.Map;
@Slf4j
@Component
@Order(-1)
@RequiredArgsConstructor
public class JsonErrorHandler implements ErrorWebExceptionHandler {
private final ObjectMapper objectMapper;
@Override
public Mono<Void> handle(ServerWebExchange exchange, Throwable ex) {
ServerHttpResponse response = exchange.getResponse();
if (response.isCommitted()) {
return Mono.error(ex);
}
response.getHeaders().setContentType(MediaType.APPLICATION_JSON);
HttpStatus status = HttpStatus.INTERNAL_SERVER_ERROR;
String message = "Internal server error";
if (ex instanceof ResponseStatusException responseStatusException) {
            status = HttpStatus.valueOf(responseStatusException.getStatusCode().value());
            // getReason() 可能为 null回退到标准原因短语
            message = responseStatusException.getReason() != null
                    ? responseStatusException.getReason()
                    : status.getReasonPhrase();
} else if (ex instanceof IllegalArgumentException) {
status = HttpStatus.BAD_REQUEST;
message = ex.getMessage();
}
response.setStatusCode(status);
Map<String, Object> errorBody = new HashMap<>();
errorBody.put("code", status.value());
errorBody.put("message", message);
errorBody.put("data", null);
errorBody.put("timestamp", System.currentTimeMillis());
try {
byte[] bytes = objectMapper.writeValueAsBytes(errorBody);
DataBuffer buffer = response.bufferFactory().wrap(bytes);
return response.writeWith(Mono.just(buffer));
} catch (Exception e) {
log.error("Error writing error response", e);
return response.setComplete();
}
}
}
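该处理器的核心是"异常类型 -> HTTP 状态"的映射,可以不依赖响应式管线单独演示。下面是一个假设性的标准库示例(状态码用普通 int 表示,`ErrorBody` 为演示用类型):

```java
public class ErrorMappingDemo {
    /** 演示用的错误响应体(实际代码中是 Map 序列化为 JSON */
    record ErrorBody(int code, String message) {}

    /** 将异常映射为统一错误响应,分支顺序与 JsonErrorHandler 一致 */
    static ErrorBody map(Throwable ex) {
        if (ex instanceof IllegalArgumentException) {
            return new ErrorBody(400, ex.getMessage());
        }
        return new ErrorBody(500, "Internal server error");
    }

    public static void main(String[] args) {
        System.out.println(map(new IllegalArgumentException("bad id")));
        System.out.println(map(new RuntimeException("boom")));
    }
}
```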


@ -0,0 +1,118 @@
package com.erp.gateway.sentinel;
import com.alibaba.csp.sentinel.adapter.gateway.common.SentinelGatewayConstants;
import com.alibaba.csp.sentinel.adapter.gateway.common.api.ApiDefinition;
import com.alibaba.csp.sentinel.adapter.gateway.common.api.ApiPathPredicateItem;
import com.alibaba.csp.sentinel.adapter.gateway.common.api.ApiPredicateItem;
import com.alibaba.csp.sentinel.adapter.gateway.common.api.GatewayApiDefinitionManager;
import com.alibaba.csp.sentinel.adapter.gateway.sc.SentinelGatewayFilter;
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.HttpStatus;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebExceptionHandler;
import reactor.core.publisher.Mono;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
@Slf4j
@Configuration
public class SentinelConfig {
@Autowired
private ObjectMapper objectMapper;
@Value("${spring.cloud.nacos.discovery.server-addr:localhost:8848}")
private String nacosServer;
    /**
     * Sentinel Gateway Filter注册为 GlobalFilter对所有路由生效
     */
    @Bean
    public GlobalFilter sentinelGatewayFilter() {
        return new SentinelGatewayFilter();
    }
/**
* Sentinel Block Exception Handler
*/
@Bean
public WebExceptionHandler sentinelGatewayBlockExceptionHandler() {
return (ServerWebExchange exchange, Throwable ex) -> {
ServerHttpResponse response = exchange.getResponse();
response.setStatusCode(HttpStatus.TOO_MANY_REQUESTS);
response.getHeaders().add("Content-Type", "application/json;charset=UTF-8");
Map<String, Object> result = new HashMap<>();
result.put("code", 429);
result.put("message", "请求过于频繁,请稍后重试");
result.put("data", null);
try {
byte[] bytes = objectMapper.writeValueAsBytes(result);
DataBuffer buffer = response.bufferFactory().wrap(bytes);
return response.writeWith(Mono.just(buffer));
} catch (Exception e) {
log.error("Sentinel block handler error", e);
return response.setComplete();
}
};
}
/**
* Initialize Sentinel API Definitions
*/
@PostConstruct
public void initSentinelApiDefinitions() {
Set<ApiDefinition> definitions = new HashSet<>();
// Auth API
definitions.add(createApiDefinition("auth_api", "/api/auth/**"));
// User API
definitions.add(createApiDefinition("user_api", "/api/user/**"));
// Order API
definitions.add(createApiDefinition("order_api", "/api/order/**"));
// Product API
definitions.add(createApiDefinition("product_api", "/api/product/**"));
// Warehouse API
definitions.add(createApiDefinition("warehouse_api", "/api/warehouse/**"));
GatewayApiDefinitionManager.loadApiDefinitions(definitions);
log.info("Sentinel API definitions initialized");
}
private ApiDefinition createApiDefinition(String apiName, String pathPattern) {
ApiDefinition definition = new ApiDefinition(apiName);
ApiPathPredicateItem predicateItem = new ApiPathPredicateItem();
predicateItem.setPattern(pathPattern);
predicateItem.setMatchStrategy(SentinelGatewayConstants.URL_MATCH_STRATEGY_PREFIX);
Set<ApiPredicateItem> items = new HashSet<>();
items.add(predicateItem);
definition.setPredicateItems(items);
return definition;
}
}
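`URL_MATCH_STRATEGY_PREFIX` 配合 `/api/xxx/**` 模式,本质上是按路径前缀把请求归入某个 API 分组。下面用标准库粗略演示这种归组方式(分组名与上面注册的一致,仅为示意,并非 Sentinel 的实际实现):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ApiGroupDemo {
    // 与 SentinelConfig 中注册的 API 分组对应;"/**" 按前缀匹配处理
    static final Map<String, String> GROUPS = new LinkedHashMap<>();
    static {
        GROUPS.put("auth_api", "/api/auth/");
        GROUPS.put("user_api", "/api/user/");
        GROUPS.put("order_api", "/api/order/");
        GROUPS.put("product_api", "/api/product/");
        GROUPS.put("warehouse_api", "/api/warehouse/");
    }

    /** 按前缀匹配返回 API 分组名,未命中返回 default */
    static String classify(String path) {
        for (Map.Entry<String, String> e : GROUPS.entrySet()) {
            if (path.startsWith(e.getValue())) {
                return e.getKey();
            }
        }
        return "default";
    }

    public static void main(String[] args) {
        System.out.println(classify("/api/order/123"));   // order_api
        System.out.println(classify("/actuator/health")); // default
    }
}
```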


@ -0,0 +1,30 @@
spring:
cloud:
nacos:
server-addr: ${NACOS_SERVER}
username: ${NACOS_USERNAME}
password: ${NACOS_PASSWORD}
discovery:
enabled: true
register-enabled: true
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
redis:
host: ${REDIS_HOST}
port: ${REDIS_PORT}
password: ${REDIS_PASSWORD:}
logging:
level:
root: INFO
com.erp.gateway: DEBUG
management:
endpoints:
web:
exposure:
        include: "*"  # 生产环境建议仅暴露所需端点
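上面配置大量使用了 Spring 的 `${NAME:default}` 占位符(如 `${REDIS_PASSWORD:}` 表示默认空串)。其解析语义可以用标准库粗略还原,便于理解各默认值如何生效(演示用实现,并非 Spring 实际代码):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {
    static final Pattern P = Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?}");

    /** 以给定的变量表解析 ${NAME:default};无默认值且未设置时保留原样 */
    static String resolve(String text, Map<String, String> env) {
        Matcher m = P.matcher(text);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            String value = env.get(m.group(1));
            if (value == null) {
                value = m.group(2) != null ? m.group(2) : m.group(0);
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("REDIS_HOST", "redis.prod");
        System.out.println(resolve("${REDIS_HOST}:${REDIS_PORT:6379}", env)); // redis.prod:6379
        System.out.println(resolve("pwd=${REDIS_PASSWORD:}", env));           // pwd=
    }
}
```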


@ -0,0 +1,107 @@
server:
port: ${GATEWAY_PORT:8080}
spring:
cloud:
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
routes:
# 认证服务
- id: auth-service
uri: lb://erp-auth
predicates:
- Path=/api/auth/**
filters:
- StripPrefix=1
- name: RequestRateLimiter
args:
redis-rate-limiter.replenishRate: 100
redis-rate-limiter.burstCapacity: 200
# 用户服务
- id: user-service
uri: lb://erp-user
predicates:
- Path=/api/user/**
filters:
- StripPrefix=1
# 订单服务
- id: order-service
uri: lb://erp-order
predicates:
- Path=/api/order/**
filters:
- StripPrefix=1
# 商品服务
- id: product-service
uri: lb://erp-product
predicates:
- Path=/api/product/**
filters:
- StripPrefix=1
# 仓库服务
- id: warehouse-service
uri: lb://erp-warehouse
predicates:
- Path=/api/warehouse/**
filters:
- StripPrefix=1
default-filters:
- DedupeResponseHeader=Vary Access-Control-Allow-Origin Access-Control-Allow-Credentials, RETAIN_FIRST
redis:
host: ${REDIS_HOST:localhost}
port: ${REDIS_PORT:6379}
password: ${REDIS_PASSWORD:}
database: 0
timeout: 3000ms
lettuce:
pool:
max-active: 8
max-idle: 8
min-idle: 0
max-wait: -1ms
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: GMT+8
# JWT配置
jwt:
secret: ${JWT_SECRET:your-256-bit-secret-key-for-jwt-token-generation}
expiration: 86400000 # 24小时
header: Authorization
prefix: Bearer
# Sentinel配置
sentinel:
eager: true
  dashboard: ${SENTINEL_DASHBOARD:localhost:8080}  # 注意:默认值与网关端口(8080)相同,部署时需区分
# 限流规则
gateway:
rate-limit:
enabled: true
default-replenish-rate: 100
default-burst-capacity: 200
# 日志配置
logging:
level:
root: INFO
com.erp.gateway: DEBUG
org.springframework.cloud.gateway: DEBUG
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
# Actuator
management:
endpoints:
web:
exposure:
include: health,info,gateway
endpoint:
health:
show-details: always
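按上面的 `jwt` 配置header 为 `Authorization`、prefix 为 `Bearer`),下游过滤器在校验前需要先剥离前缀。下面是该提取步骤的最小示例(假设前缀与 token 之间以单个空格分隔,实际的签名校验库不在此范围内):

```java
public class BearerTokenDemo {
    // 假设:配置的 prefix "Bearer" 与 token 之间以单个空格分隔
    static final String PREFIX = "Bearer ";

    /** 从 Authorization 头中提取裸 token格式不符时返回 null */
    static String extractToken(String authorizationHeader) {
        if (authorizationHeader != null && authorizationHeader.startsWith(PREFIX)) {
            return authorizationHeader.substring(PREFIX.length());
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(extractToken("Bearer abc.def.ghi")); // abc.def.ghi
        System.out.println(extractToken("Basic xyz"));          // null
        // expiration: 86400000 毫秒即 24 小时
        System.out.println(86400000L / (1000 * 60 * 60) + "h"); // 24h
    }
}
```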


@ -0,0 +1,15 @@
spring:
application:
name: erp-gateway
cloud:
nacos:
server-addr: ${NACOS_SERVER:localhost:8848}
username: ${NACOS_USERNAME:nacos}
password: ${NACOS_PASSWORD:nacos}
config:
namespace: ${NACOS_NAMESPACE:public}
group: DEFAULT_GROUP
file-extension: yml
discovery:
namespace: ${NACOS_NAMESPACE:public}
group: DEFAULT_GROUP


@ -0,0 +1,61 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="gateway"/>
<!-- Console Appender -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<!-- JSON Log Appender for ELK -->
<appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>logs/${APP_NAME}.json</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/${APP_NAME}.%d{yyyy-MM-dd}.json</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
<timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSZ</timestampFormat>
<appendLineSeparator>true</appendLineSeparator>
<jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
<prettyPrint>false</prettyPrint>
</jsonFormatter>
</layout>
</encoder>
</appender>
<!-- Async Appender -->
<appender name="ASYNC_JSON" class="ch.qos.logback.classic.AsyncAppender">
<discardingThreshold>0</discardingThreshold>
<queueSize>512</queueSize>
<appender-ref ref="JSON"/>
</appender>
<!-- Gateway Request Log -->
<appender name="GATEWAY_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>logs/gateway-request.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/gateway-request.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="ASYNC_JSON"/>
</root>
<logger name="com.erp.gateway" level="DEBUG"/>
<logger name="org.springframework.cloud.gateway" level="DEBUG"/>
<logger name="reactor.netty" level="INFO"/>
</configuration>


@ -0,0 +1,72 @@
spring:
cloud:
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
routes:
- id: auth-service
uri: lb://erp-auth
predicates:
- Path=/api/auth/**
filters:
- StripPrefix=1
- id: user-service
uri: lb://erp-user
predicates:
- Path=/api/user/**
filters:
- StripPrefix=1
- id: order-service
uri: lb://erp-order
predicates:
- Path=/api/order/**
filters:
- StripPrefix=1
- id: product-service
uri: lb://erp-product
predicates:
- Path=/api/product/**
filters:
- StripPrefix=1
- id: warehouse-service
uri: lb://erp-warehouse
predicates:
- Path=/api/warehouse/**
filters:
- StripPrefix=1
default-filters:
- DedupeResponseHeader=Vary Access-Control-Allow-Origin Access-Control-Allow-Credentials, RETAIN_FIRST
# Sentinel Flow Rules
sentinel:
gateway:
flow-rules:
- resource: /api/auth/**
count: 200
grade: 1
strategy: 0
- resource: /api/user/**
count: 100
grade: 1
strategy: 0
- resource: /api/order/**
count: 100
grade: 1
strategy: 0
- resource: /api/product/**
count: 100
grade: 1
strategy: 0
- resource: /api/warehouse/**
count: 100
grade: 1
strategy: 0
# Redis Rate Limit Config同一 YAML 文档不允许重复的顶层 spring 键,
# 用 "---" 分隔为独立文档)
---
spring:
redis:
rate-limiter:
replenish-rate: 100
burst-capacity: 200
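上面的 `replenish-rate`/`burst-capacity` 描述的是令牌桶:令牌以 `replenishRate`/秒 的速度补充、上限为 `burstCapacity`,每个请求消耗一个令牌。下面是该语义的简化单线程示例(真实限流器以 Redis Lua 脚本实现,此处仅为演示):

```java
public class TokenBucketDemo {
    final double replenishRate;  // 每秒补充的令牌数
    final double burstCapacity;  // 桶容量(允许的突发量)
    double tokens;
    long lastRefillMillis;

    TokenBucketDemo(double replenishRate, double burstCapacity, long nowMillis) {
        this.replenishRate = replenishRate;
        this.burstCapacity = burstCapacity;
        this.tokens = burstCapacity;   // 初始桶满
        this.lastRefillMillis = nowMillis;
    }

    /** 尝试获取一个令牌nowMillis 由调用方传入以便于测试 */
    boolean tryAcquire(long nowMillis) {
        double refill = (nowMillis - lastRefillMillis) / 1000.0 * replenishRate;
        tokens = Math.min(burstCapacity, tokens + refill);
        lastRefillMillis = nowMillis;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucketDemo bucket = new TokenBucketDemo(100, 200, 0);
        int granted = 0;
        for (int i = 0; i < 300; i++) {
            if (bucket.tryAcquire(0)) granted++;   // 同一瞬间的突发请求
        }
        System.out.println(granted);               // 200突发上限
        System.out.println(bucket.tryAcquire(1000)); // true1 秒后补充了 100 个令牌
    }
}
```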


@ -0,0 +1,314 @@
apiVersion: v1
kind: Namespace
metadata:
name: erp-prod
---
# ============================================================
# 数据库初始化Job - 用于启动时执行数据库迁移
# ============================================================
apiVersion: batch/v1
kind: Job
metadata:
name: erp-db-init
namespace: erp-prod
labels:
app: erp-db-init
tier: database
annotations:
description: "ERP数据库初始化Job在应用启动前执行"
spec:
ttlSecondsAfterFinished: 300
backoffLimit: 3
template:
metadata:
labels:
app: erp-db-init
tier: database
spec:
restartPolicy: OnFailure
initContainers:
# 等待MySQL就绪
- name: wait-for-mysql
image: busybox:1.36
command:
- sh
- -c
- |
echo "等待MySQL服务..."
until nc -z mysql 3306; do
echo "MySQL未就绪等待中..."
sleep 5
done
echo "MySQL已就绪"
containers:
- name: erp-db-init
image: mysql:8.0
command:
- sh
- -c
- |
echo "开始初始化数据库..."
mysql -h mysql -P 3306 -uroot -p"${MYSQL_ROOT_PASSWORD}" <<'EOSQL'
CREATE DATABASE IF NOT EXISTS erp_java CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS nacos_config CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS seata CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER IF NOT EXISTS 'erp_user'@'%' IDENTIFIED BY 'erp123456';
CREATE USER IF NOT EXISTS 'erp_user'@'localhost' IDENTIFIED BY 'erp123456';
GRANT ALL PRIVILEGES ON erp_java.* TO 'erp_user'@'%';
GRANT ALL PRIVILEGES ON erp_java.* TO 'erp_user'@'localhost';
GRANT ALL PRIVILEGES ON nacos_config.* TO 'erp_user'@'%';
GRANT ALL PRIVILEGES ON seata.* TO 'erp_user'@'%';
FLUSH PRIVILEGES;
USE erp_java;
-- 创建版本跟踪表
CREATE TABLE IF NOT EXISTS schema_version (
version_rank INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
installed_rank INT NOT NULL,
version VARCHAR(50) NOT NULL,
description VARCHAR(200) NOT NULL,
type VARCHAR(20) NOT NULL DEFAULT 'SQL',
script VARCHAR(1000) NOT NULL,
checksum INT,
installed_by VARCHAR(100) NOT NULL DEFAULT 'system',
installed_on DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
execution_time INT NOT NULL,
success TINYINT NOT NULL DEFAULT 1,
UNIQUE KEY unique_ver_idx (version),
KEY installed_rank_idx (installed_rank),
KEY success_idx (success),
KEY version_rank_idx (version_rank)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS users (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) UNIQUE,
phone VARCHAR(20) UNIQUE,
password_hash VARCHAR(255) NOT NULL,
real_name VARCHAR(50),
avatar VARCHAR(255),
status TINYINT NOT NULL DEFAULT 1,
is_super_admin TINYINT NOT NULL DEFAULT 0,
last_login_at DATETIME,
last_login_ip VARCHAR(45),
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at DATETIME,
INDEX idx_tenant_id (tenant_id),
INDEX idx_username (username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS tenants (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
company_name VARCHAR(200) NOT NULL,
contact_name VARCHAR(50) NOT NULL,
contact_phone VARCHAR(20) NOT NULL,
contact_email VARCHAR(100) NOT NULL,
domain VARCHAR(100) UNIQUE,
status TINYINT NOT NULL DEFAULT 1,
max_users INT DEFAULT 10,
features JSON,
settings JSON,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_company_name (company_name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS products (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
product_code VARCHAR(50) NOT NULL,
product_name VARCHAR(200) NOT NULL,
category_id BIGINT,
unit VARCHAR(20),
price DECIMAL(12,2),
cost DECIMAL(12,2),
stock INT DEFAULT 0,
min_stock INT DEFAULT 0,
status TINYINT NOT NULL DEFAULT 1,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at DATETIME,
UNIQUE KEY uk_code_tenant (product_code, tenant_id),
INDEX idx_category (category_id),
INDEX idx_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS categories (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
parent_id BIGINT DEFAULT 0,
category_name VARCHAR(100) NOT NULL,
sort_order INT DEFAULT 0,
status TINYINT NOT NULL DEFAULT 1,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_tenant_parent (tenant_id, parent_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS orders (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
order_no VARCHAR(50) NOT NULL UNIQUE,
tenant_id BIGINT NOT NULL DEFAULT 0,
customer_id BIGINT NOT NULL,
total_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
discount_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
payable_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
paid_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
status VARCHAR(20) NOT NULL DEFAULT 'PENDING',
order_date DATE NOT NULL,
delivery_date DATE,
remark TEXT,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at DATETIME,
INDEX idx_tenant_id (tenant_id),
INDEX idx_customer_id (customer_id),
INDEX idx_order_no (order_no),
INDEX idx_order_date (order_date),
INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS order_items (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
order_id BIGINT NOT NULL,
tenant_id BIGINT NOT NULL DEFAULT 0,
product_id BIGINT NOT NULL,
product_code VARCHAR(50) NOT NULL,
product_name VARCHAR(200) NOT NULL,
unit VARCHAR(20),
quantity INT NOT NULL DEFAULT 1,
unit_price DECIMAL(12,2) NOT NULL,
discount_rate DECIMAL(5,2) DEFAULT 100.00,
amount DECIMAL(12,2) NOT NULL,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_order_id (order_id),
INDEX idx_tenant_id (tenant_id),
INDEX idx_product_id (product_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS inventory_logs (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
product_id BIGINT NOT NULL,
warehouse_id BIGINT,
order_no VARCHAR(50),
in_out_type VARCHAR(10) NOT NULL COMMENT 'IN/OUT/ADJUST',
quantity INT NOT NULL,
before_stock INT NOT NULL DEFAULT 0,
after_stock INT NOT NULL DEFAULT 0,
operator_id BIGINT,
remark VARCHAR(500),
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_tenant_product (tenant_id, product_id),
INDEX idx_created_at (created_at),
INDEX idx_order_no (order_no)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS warehouses (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
warehouse_code VARCHAR(50) NOT NULL,
warehouse_name VARCHAR(100) NOT NULL,
address VARCHAR(255),
contact_name VARCHAR(50),
contact_phone VARCHAR(20),
is_default TINYINT NOT NULL DEFAULT 0,
status TINYINT NOT NULL DEFAULT 1,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at DATETIME,
UNIQUE KEY uk_code_tenant (warehouse_code, tenant_id),
INDEX idx_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS audit_logs (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
user_id BIGINT,
username VARCHAR(50),
module VARCHAR(50) NOT NULL,
operation VARCHAR(100) NOT NULL,
method VARCHAR(200),
request_url VARCHAR(500),
request_method VARCHAR(10),
request_params TEXT,
request_body TEXT,
response_body TEXT,
ip_address VARCHAR(45),
user_agent VARCHAR(500),
execution_time INT NOT NULL DEFAULT 0,
status TINYINT NOT NULL DEFAULT 1,
error_message TEXT,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_tenant_id (tenant_id),
INDEX idx_user_id (user_id),
INDEX idx_module (module),
INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS scheduled_tasks (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
tenant_id BIGINT NOT NULL DEFAULT 0,
task_name VARCHAR(100) NOT NULL,
task_code VARCHAR(100) NOT NULL UNIQUE,
task_type VARCHAR(20) NOT NULL COMMENT 'HTTP/METHOD/MQ',
cron_expression VARCHAR(50),
target_url VARCHAR(500),
target_method VARCHAR(100),
target_params TEXT,
mq_topic VARCHAR(100),
mq_tag VARCHAR(100),
retry_count INT DEFAULT 0,
timeout_seconds INT DEFAULT 30,
status VARCHAR(20) NOT NULL DEFAULT 'STOPPED',
last_execution_at DATETIME,
last_execution_status VARCHAR(20),
next_execution_at DATETIME,
remark TEXT,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
deleted_at DATETIME,
INDEX idx_tenant_id (tenant_id),
INDEX idx_status (status),
INDEX idx_next_execution (next_execution_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE IF NOT EXISTS task_execution_logs (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
task_id BIGINT NOT NULL,
tenant_id BIGINT NOT NULL DEFAULT 0,
execution_no VARCHAR(50) NOT NULL,
started_at DATETIME NOT NULL,
finished_at DATETIME,
execution_time INT,
status VARCHAR(20) NOT NULL,
result_message TEXT,
retry_count INT DEFAULT 0,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_task_id (task_id),
INDEX idx_status (status),
INDEX idx_started_at (started_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
SELECT '数据库表创建完成' AS result;
EOSQL
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: erp-secrets
key: MYSQL_ROOT_PASSWORD
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
imagePullSecrets:
- name: regcred


@ -0,0 +1,259 @@
apiVersion: v1
kind: Namespace
metadata:
name: erp-prod
labels:
name: erp-prod
environment: production
---
apiVersion: v1
kind: Namespace
metadata:
name: erp-infra
labels:
name: erp-infra
environment: production
---
# ============================================================
# 全局ERP ConfigMap
# ============================================================
apiVersion: v1
kind: ConfigMap
metadata:
name: erp-config
namespace: erp-prod
data:
# Nacos配置中心
NACOS_HOST: "nacos.erp-infra.svc.cluster.local"
NACOS_PORT: "8848"
NACOS_NAMESPACE: "prod"
NACOS_GROUP: "DEFAULT_GROUP"
# 数据库
DB_HOST: "mysql.erp-infra.svc.cluster.local"
DB_PORT: "3306"
DB_NAME: "erp_java"
DB_USERNAME: "erp_user"
# Redis
REDIS_HOST: "redis.erp-infra.svc.cluster.local"
REDIS_PORT: "6379"
REDIS_DB: "0"
# RocketMQ
ROCKETMQ_NAMESRV_ADDR: "rocketmq.erp-infra.svc.cluster.local:9876"
# Seata
SEATA_SERVER_ADDR: "seata.erp-infra.svc.cluster.local:8091"
# MinIO
MINIO_ENDPOINT: "http://minio.erp-infra.svc.cluster.local:9000"
MINIO_BUCKET: "erp"
# SkyWalking
SW_OAP_ADDR: "skywalking-oap.erp-infra.svc.cluster.local:11800"
# 应用公共配置
SPRING_PROFILES_ACTIVE: "prod"
JAVA_OPTS: "-Xms512m -Xmx1024m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError"
LOG_LEVEL: "INFO"
ENABLE_PROMETHEUS: "true"
METRICS_PATH: "/actuator/prometheus"
---
# ============================================================
# 全局ERP Secrets生产环境请使用Vault/AWS Secrets Manager等外部密钥管理
# ============================================================
apiVersion: v1
kind: Secret
metadata:
name: erp-secrets
namespace: erp-prod
type: Opaque
stringData:
# 数据库
DB_PASSWORD: "REPLACE_WITH_DB_PASSWORD"
MYSQL_ROOT_PASSWORD: "REPLACE_WITH_ROOT_PASSWORD"
# Redis
REDIS_PASSWORD: "REPLACE_WITH_REDIS_PASSWORD"
# JWT
JWT_SECRET: "REPLACE_WITH_JWT_SECRET_MIN_32_CHARS"
# MinIO
MINIO_ACCESS_KEY: "REPLACE_WITH_MINIO_ACCESS_KEY"
MINIO_SECRET_KEY: "REPLACE_WITH_MINIO_SECRET_KEY"
# Nacos
NACOS_USERNAME: "nacos"
NACOS_PASSWORD: "REPLACE_WITH_NACOS_PASSWORD"
# Seata
SEATA_TX_VGROUP: "erp_tx_group"
SEATA_SECRET: "REPLACE_WITH_SEATA_SECRET"
---
# ============================================================
# 主Ingress - API网关入口
# ============================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: erp-api-ingress
namespace: erp-prod
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "100m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/limit-rpm: "100"
nginx.ingress.kubernetes.io/enable-access-log: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- api.erpzbbh.cn
- erpzbbh.cn
secretName: erp-api-tls
rules:
- host: api.erpzbbh.cn
http:
paths:
# 用户服务
- path: /user
pathType: Prefix
backend:
service:
name: user-service
port:
name: http
# 认证服务
- path: /auth
pathType: Prefix
backend:
service:
name: user-service
port:
name: http
# 产品服务
- path: /product
pathType: Prefix
backend:
service:
name: product-service
port:
name: http
# 订单服务
- path: /order
pathType: Prefix
backend:
service:
name: order-service
port:
name: http
# 库存服务
- path: /inventory
pathType: Prefix
backend:
service:
name: inventory-service
port:
name: http
# 租户服务
- path: /tenant
pathType: Prefix
backend:
service:
name: tenant-service
port:
name: http
# 权限服务
- path: /permission
pathType: Prefix
backend:
service:
name: permission-service
port:
name: http
# 文件服务
- path: /file
pathType: Prefix
backend:
service:
name: file-service
port:
name: http
# 报表服务
- path: /report
pathType: Prefix
backend:
service:
name: report-service
port:
name: http
# 仪表盘服务
- path: /dashboard
pathType: Prefix
backend:
service:
name: dashboard-service
port:
name: http
# 定时任务服务
- path: /task
pathType: Prefix
backend:
service:
name: scheduled-task-service
port:
name: http
---
# ============================================================
# RocketMQ Service供需要MQ的服务使用
# ============================================================
apiVersion: v1
kind: Service
metadata:
name: rocketmq
namespace: erp-infra
labels:
app: rocketmq
spec:
type: ClusterIP
ports:
- name: namesrv
port: 9876
targetPort: 9876
- name: broker
port: 10911
targetPort: 10911
selector:
app: rocketmq
---
# ============================================================
# SkyWalking Ingress
# ============================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: skywalking-ui-ingress
namespace: erp-infra
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- skywalking.erpzbbh.cn
secretName: skywalking-ui-tls
rules:
- host: skywalking.erpzbbh.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: skywalking-ui
port:
name: http

View File

@ -0,0 +1,54 @@
# Kubernetes Kustomization - ERP全量服务统一部署
# 用法: kubectl apply -k infrastructure/kubernetes/overlays/prod
#
# 基础配置
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: erp-prod
resources:
# 全局基础设施
- ../base/erp-global-infra.yaml
# 数据库初始化Job首次部署时需要
# - ../base/erp-db-init-job.yaml
commonLabels:
app.kubernetes.io/name: erp
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/part-of: erp-java-backend
secretGenerator:
- name: erp-secrets
literals:
- DB_PASSWORD=REPLACE_WITH_DB_PASSWORD
- MYSQL_ROOT_PASSWORD=REPLACE_WITH_ROOT_PASSWORD
- REDIS_PASSWORD=REPLACE_WITH_REDIS_PASSWORD
- JWT_SECRET=REPLACE_WITH_JWT_SECRET_MIN_32_CHARS
options:
disableNameSuffixHash: true
configMapGenerator:
- name: erp-config
literals:
- NACOS_HOST=nacos
- NACOS_PORT=8848
- NACOS_NAMESPACE=prod
- DB_HOST=mysql
- DB_PORT=3306
- DB_NAME=erp_java
- DB_USERNAME=erp_user
- REDIS_HOST=redis
- REDIS_PORT=6379
options:
disableNameSuffixHash: true
replicas:
- name: gateway
count: 2
- name: user-service
count: 3
- name: product-service
count: 3
- name: order-service
count: 3


@ -0,0 +1,95 @@
-- 创建Nacos配置数据库
CREATE DATABASE IF NOT EXISTS nacos_config CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- 创建ERP业务数据库
CREATE DATABASE IF NOT EXISTS erp_java CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- 创建用户表对应PHP系统的users表
USE erp_java;

-- Users table
CREATE TABLE IF NOT EXISTS users (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'User ID',
    tenant_id BIGINT NOT NULL DEFAULT 0 COMMENT 'Tenant ID',
    username VARCHAR(50) NOT NULL COMMENT 'Username',
    email VARCHAR(100) UNIQUE COMMENT 'Email',
    phone VARCHAR(20) UNIQUE COMMENT 'Phone number',
    password_hash VARCHAR(255) NOT NULL COMMENT 'Password hash',
    real_name VARCHAR(50) COMMENT 'Real name',
    avatar VARCHAR(255) COMMENT 'Avatar URL',
    status TINYINT NOT NULL DEFAULT 1 COMMENT 'Status: 0 - disabled, 1 - enabled',
    is_super_admin TINYINT NOT NULL DEFAULT 0 COMMENT 'Whether this user is a super admin',
    last_login_at DATETIME COMMENT 'Last login time',
    last_login_ip VARCHAR(45) COMMENT 'Last login IP',
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Created at',
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Updated at',
    deleted_at DATETIME COMMENT 'Deleted at (soft delete)',
    INDEX idx_tenant_id (tenant_id),
    INDEX idx_username (username),
    INDEX idx_email (email),
    INDEX idx_phone (phone)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Users';

-- Tenants table
CREATE TABLE IF NOT EXISTS tenants (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'Tenant ID',
    company_name VARCHAR(200) NOT NULL COMMENT 'Company name',
    contact_name VARCHAR(50) NOT NULL COMMENT 'Contact name',
    contact_phone VARCHAR(20) NOT NULL COMMENT 'Contact phone',
    contact_email VARCHAR(100) NOT NULL COMMENT 'Contact email',
    domain VARCHAR(100) UNIQUE COMMENT 'Tenant domain',
    package_id BIGINT COMMENT 'Plan (package) ID',
    trial_ends_at DATETIME COMMENT 'Trial expiry time',
    status TINYINT NOT NULL DEFAULT 1 COMMENT 'Status: 0 - suspended, 1 - active, 2 - trial',
    max_users INT DEFAULT 10 COMMENT 'Maximum number of users',
    max_orders_per_month INT DEFAULT 1000 COMMENT 'Maximum orders per month',
    features JSON COMMENT 'Feature list',
    settings JSON COMMENT 'Settings',
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Created at',
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Updated at',
    INDEX idx_company_name (company_name),
    INDEX idx_domain (domain),
    INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Tenants';

-- Roles table
CREATE TABLE IF NOT EXISTS roles (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'Role ID',
    tenant_id BIGINT NOT NULL COMMENT 'Tenant ID',
    name VARCHAR(50) NOT NULL COMMENT 'Role name',
    code VARCHAR(50) NOT NULL COMMENT 'Role code',
    description VARCHAR(255) COMMENT 'Role description',
    is_default TINYINT DEFAULT 0 COMMENT 'Whether this is the default role',
    permissions JSON COMMENT 'Permission list',
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Created at',
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Updated at',
    UNIQUE KEY uk_tenant_code (tenant_id, code),
    INDEX idx_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Roles';

-- User-role mapping table
CREATE TABLE IF NOT EXISTS user_roles (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'Mapping ID',
    user_id BIGINT NOT NULL COMMENT 'User ID',
    role_id BIGINT NOT NULL COMMENT 'Role ID',
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Created at',
    UNIQUE KEY uk_user_role (user_id, role_id),
    INDEX idx_user_id (user_id),
    INDEX idx_role_id (role_id),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    FOREIGN KEY (role_id) REFERENCES roles(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='User-role mappings';

-- Seed data
-- Super admin user (intended password: Admin@123456; the hash below is a
-- placeholder, generate a real BCrypt hash before using it)
INSERT IGNORE INTO users (id, username, email, password_hash, real_name, status, is_super_admin) VALUES
(1, 'admin', 'admin@erp.com', '$2a$10$YourPasswordHashHere', 'System Administrator', 1, 1);

-- Default tenant
INSERT IGNORE INTO tenants (id, company_name, contact_name, contact_phone, contact_email, status) VALUES
(1, 'Demo Company', 'Demo Contact', '13800138000', 'demo@erp.com', 1);

-- Default roles
INSERT IGNORE INTO roles (id, tenant_id, name, code, description, is_default, permissions) VALUES
(1, 1, 'Administrator', 'admin', 'System administrator', 1, '["*"]'),
(2, 1, 'Regular user', 'user', 'Regular user', 0, '["order:view", "order:create"]');
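The `permissions` column stores JSON arrays of codes such as `"order:view"`, with `"*"` granting everything. A minimal matcher sketch; the `PermissionMatcher` class and the `module:*` wildcard convention are assumptions for illustration, not part of the schema:

```java
import java.util.List;

public class PermissionMatcher {
    /** Returns true if any granted code covers the required permission. */
    public static boolean allows(List<String> granted, String required) {
        for (String code : granted) {
            if (code.equals("*") || code.equals(required)) {
                return true;
            }
            // Assumed module wildcard, e.g. "order:*" covers "order:create"
            if (code.endsWith(":*")
                    && required.startsWith(code.substring(0, code.length() - 1))) {
                return true;
            }
        }
        return false;
    }
}
```

With the seeded roles above, the admin role (`["*"]`) passes every check, while the regular-user role passes only `order:view` and `order:create`.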

nacos/README.md
# Nacos Service Registry and Discovery

This document describes how Nacos is configured and used in the ERP microservice architecture.

## 📁 Directory Layout

```
nacos/
├── README.md                           # This document
├── docker-compose.standalone.yml       # Standalone-mode Docker Compose
├── docker-compose.cluster.yml          # Cluster-mode Docker Compose (3 nodes)
├── init/
│   └── mysql-schema.sql                # Nacos MySQL persistence schema
├── cluster/
│   └── nginx.conf                      # Nginx load-balancer config for the Nacos cluster
├── examples/
│   ├── nacos-api-usage.md              # Nacos REST API usage examples
│   ├── ServiceRegistrationExample.java # Service registration example code
│   ├── HealthCheckConfig.java          # Health-check configuration example
│   └── client-config/
│       ├── nacos-client.properties     # Generic Nacos client configuration
│       ├── bootstrap.yml               # Spring Cloud bootstrap template
│       └── user-service-nacos.yml      # Complete config example for user-service
└── scripts/
    └── startup.sh                      # Nacos startup script
```
## 🚀 Quick Start

### 1. Start Nacos (standalone mode)

```bash
cd /root/.openclaw/workspace/erp-java-backend/nacos

# Option 1: use the script
chmod +x scripts/startup.sh
./scripts/startup.sh --mode standalone

# Option 2: Docker Compose plugin
docker compose -f docker-compose.standalone.yml up -d

# Option 3: legacy docker-compose binary
docker-compose -f docker-compose.standalone.yml up -d
```

### 2. Open the Nacos console

- **URL**: http://localhost:8848/nacos
- **Username**: `nacos`
- **Password**: `nacos123456`

### 3. Verify service registration

In the console, go to **Service Management** → **Service List**; registered services should appear there.
---
## 🏗 Architecture

### Standalone mode

```
┌─────────────────┐
│  Microservices  │
│  (Spring Boot)  │
└────────┬────────┘
         │ HTTP/gRPC
         ▼
┌─────────────────┐
│  Nacos Server   │ ◄── single node
│    (v2.2.3)     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│      MySQL      │ ◄── persistent storage
│      (8.0)      │
└─────────────────┘
```

### Cluster mode

```
                        ┌─────────────────┐
                        │   Nginx (LB)    │
                        │  :8848, :9848   │
                        └────────┬────────┘
        ┌────────────────────────┼────────────────────────┐
        ▼                        ▼                        ▼
┌───────────────┐        ┌───────────────┐        ┌───────────────┐
│ Nacos Node 1  │        │ Nacos Node 2  │        │ Nacos Node 3  │
│    :8848      │◄──────►│    :8848      │◄──────►│    :8848      │
│    :9848      │        │    :9848      │        │    :9848      │
│    :9849      │        │    :9849      │        │    :9849      │
└───────┬───────┘        └───────┬───────┘        └───────┬───────┘
        │                        │                        │
        └────────────────────────┼────────────────────────┘
                                 ▼
                        ┌───────────────┐
                        │     MySQL     │
                        │     (8.0)     │
                        └───────────────┘
```
---
## ⚙️ Configuration

### Environment variables

| Variable | Description | Default |
|----------|-------------|---------|
| `NACOS_SERVER_ADDR` | Nacos server address | `127.0.0.1:8848` |
| `NACOS_NAMESPACE` | Namespace ID | `public` |
| `NACOS_GROUP` | Group name | `DEFAULT_GROUP` |
| `NACOS_USERNAME` | Nacos username | `nacos` |
| `NACOS_PASSWORD` | Nacos password | `nacos123456` |
| `SPRING_PROFILES_ACTIVE` | Spring profile | `dev` |

### Microservice configuration

#### Option 1: edit application.yml

Add to each microservice's `application.yml`:

```yaml
spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
        namespace: public
        group: DEFAULT_GROUP
        prefer-ip-address: true
        cluster-name: DEFAULT
      config:
        server-addr: ${spring.cloud.nacos.discovery.server-addr}
        file-extension: yaml
        shared-configs:
          - data-id: common-config.yaml
            group: DEFAULT_GROUP
            refresh: true
```

#### Option 2: use bootstrap.yml (recommended)

Copy `examples/client-config/bootstrap.yml` into the microservice's `src/main/resources/` and adjust as needed. Note that with Spring Boot 3.x, `bootstrap.yml` is only processed when the `spring-cloud-starter-bootstrap` dependency is on the classpath.

#### Option 3: override via system properties

System properties must come before `-jar`:

```bash
java \
  -DNACOS_SERVER_ADDR=127.0.0.1:8848 \
  -DNACOS_NAMESPACE=public \
  -DNACOS_GROUP=DEFAULT_GROUP \
  -jar app.jar
```
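The `${VAR:default}` placeholders used throughout these configs resolve a system property or environment variable and fall back to a default. Roughly equivalent JDK logic, as a sketch (the `NacosAddrResolver` class is hypothetical):

```java
public class NacosAddrResolver {
    /** Mirrors ${NACOS_SERVER_ADDR:127.0.0.1:8848}: system property, then env var, then default. */
    public static String serverAddr() {
        String v = System.getProperty("NACOS_SERVER_ADDR");
        if (v == null || v.isBlank()) {
            v = System.getenv("NACOS_SERVER_ADDR");
        }
        return (v == null || v.isBlank()) ? "127.0.0.1:8848" : v;
    }
}
```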
---
## 📋 Microservice List and Configuration

| Service | Port | Context Path | Description |
|---------|------|--------------|-------------|
| `api-gateway` | 8080 | `/` | API gateway |
| `user-service` | 8082 | `/user` | User service |
| `tenant-service` | 8083 | `/tenant` | Tenant service |
| `product-service` | 8084 | `/product` | Product service |
| `order-service` | 8085 | `/order` | Order service |
| `inventory-service` | 8086 | `/inventory` | Inventory service |
| `finance-service` | 8087 | `/finance` | Finance service |
| `notification-service` | 8088 | `/notification` | Notification service |
| `file-service` | 8089 | `/file` | File service |
| `admin-service` | 8090 | `/admin` | Admin service |
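The gateway forwards requests by context-path prefix. A sketch mirroring the table above (the `RouteTable` class is hypothetical; a real deployment would express this as Spring Cloud Gateway routes backed by Nacos discovery):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RouteTable {
    // Context path -> service name, copied from the table above.
    private static final Map<String, String> ROUTES = new LinkedHashMap<>();
    static {
        ROUTES.put("/user", "user-service");
        ROUTES.put("/tenant", "tenant-service");
        ROUTES.put("/product", "product-service");
        ROUTES.put("/order", "order-service");
        ROUTES.put("/inventory", "inventory-service");
        ROUTES.put("/finance", "finance-service");
        ROUTES.put("/notification", "notification-service");
        ROUTES.put("/file", "file-service");
        ROUTES.put("/admin", "admin-service");
    }

    /** Resolves a request path to its target service; unmatched paths stay at the gateway ("/"). */
    public static String resolve(String path) {
        for (Map.Entry<String, String> e : ROUTES.entrySet()) {
            if (path.equals(e.getKey()) || path.startsWith(e.getKey() + "/")) {
                return e.getValue();
            }
        }
        return "api-gateway";
    }
}
```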
---
## 🏥 Health Checks

### Default health check

By default, Nacos determines instance health via **TCP port probing**.

### Custom health check (recommended)

With Spring Boot Actuator:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics
  endpoint:
    health:
      show-details: always
  health:
    nacos:
      enabled: true
```

### Heartbeat flow

```
1. The client sends a heartbeat to Nacos every 5 s.
2. After 15 s without a heartbeat, Nacos marks the instance unhealthy.
3. After 30 s without a heartbeat, Nacos removes the instance.
4. When the service recovers, it re-registers automatically.
```
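The thresholds in the flow above can be expressed as a tiny state function. This is a sketch for illustration only; the real bookkeeping lives inside the Nacos server:

```java
public class InstanceHealthTracker {
    // Thresholds from the heartbeat flow above, in seconds.
    static final long UNHEALTHY_AFTER = 15;
    static final long REMOVED_AFTER = 30;

    public enum State { HEALTHY, UNHEALTHY, REMOVED }

    /** State of an instance given the seconds elapsed since its last heartbeat. */
    public static State stateAfter(long secondsSinceLastBeat) {
        if (secondsSinceLastBeat >= REMOVED_AFTER) {
            return State.REMOVED;
        }
        if (secondsSinceLastBeat >= UNHEALTHY_AFTER) {
            return State.UNHEALTHY;
        }
        return State.HEALTHY;
    }
}
```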
---
## 🌐 Nacos API Calls

See [nacos-api-usage.md](examples/nacos-api-usage.md) for detailed API examples.

### Register a service

The v1 API takes `ip` and `port` parameters:

```bash
curl -X POST 'http://127.0.0.1:8848/nacos/v1/ns/instance' \
  -d 'ip=127.0.0.1' \
  -d 'port=8082' \
  -d 'serviceName=user-service'
```

### Discover a service

```bash
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/instance/list?serviceName=user-service'
```

### Manage configuration

```bash
# Publish a config (use --data-urlencode so multi-line YAML is form-encoded)
curl -X POST 'http://127.0.0.1:8848/nacos/v1/cs/configs' \
  --data-urlencode 'dataId=user-service.yaml' \
  --data-urlencode 'group=DEFAULT_GROUP' \
  --data-urlencode 'content=spring: {datasource: {url: jdbc:mysql://localhost:3307}}'

# Fetch a config
curl -s 'http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=user-service.yaml&group=DEFAULT_GROUP'
```
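When publishing from code rather than curl, the YAML content must be URL-encoded into the form body. A JDK-only sketch that builds the request body for `POST /nacos/v1/cs/configs` (the `NacosConfigForm` helper is hypothetical; the official `nacos-client` library does this for you):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class NacosConfigForm {
    /** Builds the x-www-form-urlencoded body for POST /nacos/v1/cs/configs. */
    public static String publishBody(String dataId, String group, String yamlContent) {
        return "dataId=" + enc(dataId)
             + "&group=" + enc(group)
             + "&content=" + enc(yamlContent);
    }

    private static String enc(String s) {
        // Encodes newlines, colons, etc. so multi-line YAML survives transport
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}
```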
---
## 🔐 安全配置
### 生产环境必改项
1. **修改默认密码**
```yaml
NACOS_AUTH_ENABLE: "true"
NACOS_AUTH_TOKEN: "YourNewSecretKeyHere..."
```
2. **启用认证**
```yaml
spring.cloud.nacos.discovery.username=nacos
spring.cloud.nacos.discovery.password=YourNewPassword
```
3. **网络隔离**
- 使用内网 IP不暴露公网
- 配置防火墙规则
- 使用 VIP/Nginx 做负载均衡
---
## 🚢 Deploying to Kubernetes

See [docker-compose.cluster.yml](docker-compose.cluster.yml) for the cluster-mode configuration.

### Key points

- **3-node deployment**: use an odd number of nodes so the Raft consensus protocol can always form a majority
- **Externalized MySQL**: use primary/replica MySQL in production
- **Nginx load balancing**: a single entry point fanning out to the 3 Nacos nodes
- **Persistence**: mount the log and data directories on PVCs
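The odd-node recommendation follows from Raft's majority rule: progress requires a quorum of ⌊n/2⌋ + 1 nodes, so a 4-node cluster tolerates no more failures than a 3-node one. The arithmetic, as a sketch:

```java
public class RaftQuorum {
    /** Minimum nodes that must agree for the cluster to make progress. */
    public static int quorum(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    /** Node failures the cluster tolerates while still holding a quorum. */
    public static int tolerableFailures(int clusterSize) {
        return clusterSize - quorum(clusterSize);
    }
}
```

For example, 3 nodes tolerate 1 failure, 4 nodes still tolerate only 1, and 5 nodes tolerate 2, which is why adding a fourth node buys nothing.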
---
## ❓ FAQ

### 1. A microservice cannot register with Nacos

- Check that Nacos is up: `curl http://localhost:8848/nacos/v1/console/health/readiness`
- Check network connectivity: `telnet localhost 8848`
- Check the firewall: `firewall-cmd --list-ports`

### 2. Console login fails

- Default credentials: `nacos` / `nacos123456`
- Check whether authentication is enabled: `NACOS_AUTH_ENABLE=true`

### 3. gRPC ports are unreachable

- Nacos 2.x uses ports 9848/9849 for gRPC communication
- Make sure both ports are open

### 4. Configuration changes do not take effect

- Check that `refresh-enabled: true` is set
- Annotate the consuming bean with `@RefreshScope`
- Check that the namespace and group match
---
## 📚 References

- [Nacos official documentation](https://nacos.io/zh-cn/docs/what-is-nacos.html)
- [Spring Cloud Alibaba Nacos Discovery](https://spring-cloud-alibaba-group.github.io/github-pages/2023.0.1.0/zh-cn/documentation/spring-cloud-alibaba-nacos-discovery.html)
- [Spring Cloud Alibaba Nacos Config](https://spring-cloud-alibaba-group.github.io/github-pages/2023.0.1.0/zh-cn/documentation/spring-cloud-alibaba-nacos-config.html)

nacos/cluster/nginx.conf
# Nginx Load Balancer Configuration for Nacos Cluster
# Nacos 2.x uses gRPC, which requires HTTP/2 and long-lived connections

worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    # Basic tuning
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100M;

    # Upstream: Nacos cluster (HTTP, port 8848)
    upstream nacos_cluster {
        # Least-connections balancing
        least_conn;
        server nacos-server-1:8848 max_fails=3 fail_timeout=30s;
        server nacos-server-2:8848 max_fails=3 fail_timeout=30s;
        server nacos-server-3:8848 max_fails=3 fail_timeout=30s;
        # Keep upstream connections alive
        keepalive 32;
    }

    # Upstream: gRPC (Nacos 2.x client traffic, port 9848)
    upstream nacos_grpc {
        server nacos-server-1:9848 max_fails=3 fail_timeout=30s;
        server nacos-server-2:9848 max_fails=3 fail_timeout=30s;
        server nacos-server-3:9848 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }
    server {
        listen 8848;

        # Health-check endpoints (no access log)
        location /nacos/v1/console/health/readiness {
            proxy_pass http://nacos_cluster;
            proxy_connect_timeout 5s;
            proxy_send_timeout 10s;
            proxy_read_timeout 10s;
            access_log off;
        }
        location /nacos/v1/console/health/liveness {
            proxy_pass http://nacos_cluster;
            proxy_connect_timeout 5s;
            proxy_send_timeout 10s;
            proxy_read_timeout 10s;
            access_log off;
        }

        # All other Nacos traffic
        # (proxy_pass is only valid inside a location block)
        location / {
            proxy_pass http://nacos_cluster;

            # Timeouts (Nacos needs long-lived connections)
            proxy_connect_timeout 10s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;

            # Buffering
            proxy_buffering on;
            proxy_buffer_size 4k;
            proxy_buffers 8 4k;

            # Forwarded headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Upstream keepalive requires HTTP/1.1 and an empty Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
    # gRPC proxy (Nacos 2.x client communication)
    # Note: gRPC requires HTTP/2, and grpc_pass must live inside a location block
    server {
        listen 9848 http2;

        location / {
            grpc_pass grpc://nacos_grpc;

            # gRPC timeouts
            grpc_connect_timeout 10s;
            grpc_send_timeout 60s;
            grpc_read_timeout 60s;

            # Headers
            grpc_set_header Host $host;
            grpc_set_header X-Real-IP $remote_addr;
            grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    # Second gRPC port (server-to-server traffic, 9849).
    # Cluster nodes normally address each other directly, so this listener is optional.
    server {
        listen 9849 http2;

        location / {
            grpc_pass grpc://nacos_grpc;
            grpc_connect_timeout 10s;
            grpc_send_timeout 60s;
            grpc_read_timeout 60s;
            grpc_set_header Host $host;
            grpc_set_header X-Real-IP $remote_addr;
        }
    }
}

nacos/docker-compose.cluster.yml
# Nacos Cluster Mode - Docker Compose
# For production environments (3 or more nodes recommended)
version: '3.8'

services:
  # ---------------------------------------------------------------
  # MySQL for Nacos Cluster (use an external primary/replica MySQL in production)
  # ---------------------------------------------------------------
  nacos-mysql:
    image: mysql:8.0
    container_name: erp-nacos-mysql
    environment:
      MYSQL_ROOT_PASSWORD: root123456
      MYSQL_DATABASE: nacos_config
      MYSQL_USER: nacos
      MYSQL_PASSWORD: nacos123456
    ports:
      - "3308:3306"
    volumes:
      - nacos_mysql_data:/var/lib/mysql
      - ./init/mysql-schema.sql:/docker-entrypoint-initdb.d/mysql-schema.sql
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password
    networks:
      - nacos-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot123456"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ---------------------------------------------------------------
  # Nacos Server Node 1
  # ---------------------------------------------------------------
  nacos-server-1:
    image: nacos/nacos-server:v2.2.3
    container_name: erp-nacos-1
    hostname: nacos-server-1
    environment:
      MODE: cluster
      NACOS_SERVERS: "nacos-server-1:8848 nacos-server-2:8848 nacos-server-3:8848"
      # Needed so the MySQL settings below take effect instead of embedded storage
      SPRING_DATASOURCE_PLATFORM: mysql
      MYSQL_SERVICE_HOST: nacos-mysql
      MYSQL_SERVICE_DB_NAME: nacos_config
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: nacos123456
      # Cluster settings
      NACOS_SERVER_IP: nacos-server-1
      # JVM tuning (2 GB+ recommended in production)
      JVM_XMS: 1g
      JVM_XMX: 1g
      JVM_XMN: 512m
      # Auth settings (always change the SecretKey in production)
      NACOS_AUTH_ENABLE: "true"
      NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
      NACOS_AUTH_IDENTITY_KEY: "serverIdentity"
      NACOS_AUTH_IDENTITY_VALUE: "security"
    ports:
      - "8841:8848"
      - "9841:9848"
      - "9842:9849"
    volumes:
      - nacos1_logs:/home/nacos/logs
      - nacos1_data:/home/nacos/data
    depends_on:
      nacos-mysql:
        condition: service_healthy
    networks:
      - nacos-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/v1/console/health/readiness"]
      interval: 15s
      timeout: 10s
      retries: 10
      start_period: 60s

  # ---------------------------------------------------------------
  # Nacos Server Node 2
  # ---------------------------------------------------------------
  nacos-server-2:
    image: nacos/nacos-server:v2.2.3
    container_name: erp-nacos-2
    hostname: nacos-server-2
    environment:
      MODE: cluster
      NACOS_SERVERS: "nacos-server-1:8848 nacos-server-2:8848 nacos-server-3:8848"
      SPRING_DATASOURCE_PLATFORM: mysql
      MYSQL_SERVICE_HOST: nacos-mysql
      MYSQL_SERVICE_DB_NAME: nacos_config
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: nacos123456
      NACOS_SERVER_IP: nacos-server-2
      JVM_XMS: 1g
      JVM_XMX: 1g
      JVM_XMN: 512m
      NACOS_AUTH_ENABLE: "true"
      NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
      NACOS_AUTH_IDENTITY_KEY: "serverIdentity"
      NACOS_AUTH_IDENTITY_VALUE: "security"
    ports:
      - "8842:8848"
      - "9843:9848"
      - "9844:9849"
    volumes:
      - nacos2_logs:/home/nacos/logs
      - nacos2_data:/home/nacos/data
    depends_on:
      nacos-mysql:
        condition: service_healthy
    networks:
      - nacos-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/v1/console/health/readiness"]
      interval: 15s
      timeout: 10s
      retries: 10
      start_period: 60s

  # ---------------------------------------------------------------
  # Nacos Server Node 3
  # ---------------------------------------------------------------
  nacos-server-3:
    image: nacos/nacos-server:v2.2.3
    container_name: erp-nacos-3
    hostname: nacos-server-3
    environment:
      MODE: cluster
      NACOS_SERVERS: "nacos-server-1:8848 nacos-server-2:8848 nacos-server-3:8848"
      SPRING_DATASOURCE_PLATFORM: mysql
      MYSQL_SERVICE_HOST: nacos-mysql
      MYSQL_SERVICE_DB_NAME: nacos_config
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: nacos123456
      NACOS_SERVER_IP: nacos-server-3
      JVM_XMS: 1g
      JVM_XMX: 1g
      JVM_XMN: 512m
      NACOS_AUTH_ENABLE: "true"
      NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
      NACOS_AUTH_IDENTITY_KEY: "serverIdentity"
      NACOS_AUTH_IDENTITY_VALUE: "security"
    ports:
      - "8843:8848"
      - "9845:9848"
      - "9846:9849"
    volumes:
      - nacos3_logs:/home/nacos/logs
      - nacos3_data:/home/nacos/data
    depends_on:
      nacos-mysql:
        condition: service_healthy
    networks:
      - nacos-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/v1/console/health/readiness"]
      interval: 15s
      timeout: 10s
      retries: 10
      start_period: 60s

  # ---------------------------------------------------------------
  # Nginx as Load Balancer for Nacos Cluster
  # ---------------------------------------------------------------
  nacos-lb:
    image: nginx:alpine
    container_name: erp-nacos-lb
    ports:
      - "8848:8848"
      - "9848:9848"
    volumes:
      - ./cluster/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - nacos-server-1
      - nacos-server-2
      - nacos-server-3
    networks:
      - nacos-network
    restart: unless-stopped

volumes:
  nacos_mysql_data:
  nacos1_logs:
  nacos1_data:
  nacos2_logs:
  nacos2_data:
  nacos3_logs:
  nacos3_data:

networks:
  nacos-network:
    driver: bridge

# ==============================================================
# Nacos Docker Compose Override
# Overrides the Nacos configuration in /root/.openclaw/workspace/erp-java-backend/docker-compose.yml
#
# Usage:
#   cp docker-compose.override.example.yml ../../docker-compose.override.yml
#   docker compose up -d nacos
# ==============================================================
version: '3.8'

services:
  # Override the original Nacos service (uncomment if you need to replace it)
  # nacos:
  #   image: nacos/nacos-server:v2.2.3
  #   container_name: erp-nacos
  #   environment:
  #     MODE: standalone
  #     SPRING_DATASOURCE_PLATFORM: mysql
  #     MYSQL_SERVICE_HOST: mysql
  #     MYSQL_SERVICE_DB_NAME: nacos_config
  #     MYSQL_SERVICE_PORT: 3306
  #     MYSQL_SERVICE_USER: nacos
  #     MYSQL_SERVICE_PASSWORD: nacos123456
  #     NACOS_AUTH_ENABLE: "true"
  #     NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
  #   ports:
  #     - "8848:8848"
  #     - "9848:9848"
  #     - "9849:9849"
  #   volumes:
  #     - nacos_logs:/home/nacos/logs
  #     - nacos_data:/home/nacos/data
  #   depends_on:
  #     - mysql
  #   networks:
  #     - erp-network

  # Added: a dedicated MySQL for Nacos (optional)
  nacos-mysql:
    image: mysql:8.0
    container_name: erp-nacos-mysql
    environment:
      MYSQL_ROOT_PASSWORD: root123456
      MYSQL_DATABASE: nacos_config
      MYSQL_USER: nacos
      MYSQL_PASSWORD: nacos123456
    ports:
      - "3308:3306"
    volumes:
      - nacos_mysql_data:/var/lib/mysql
      - ./init/mysql-schema.sql:/docker-entrypoint-initdb.d/mysql-schema.sql
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password
    networks:
      - erp-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot123456"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  nacos_mysql_data:

nacos/docker-compose.standalone.yml
# Nacos Standalone Mode - Docker Compose
# For dev/test environments; single-node deployment
version: '3.8'

services:
  # MySQL for Nacos persistence (optional; comment out to use the embedded Derby)
  nacos-mysql:
    image: mysql:8.0
    container_name: erp-nacos-mysql
    environment:
      MYSQL_ROOT_PASSWORD: root123456
      MYSQL_DATABASE: nacos_config
      MYSQL_USER: nacos
      MYSQL_PASSWORD: nacos123456
    ports:
      - "3308:3306"
    volumes:
      - nacos_mysql_data:/var/lib/mysql
      - ./init/mysql-schema.sql:/docker-entrypoint-initdb.d/mysql-schema.sql
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password
    networks:
      - nacos-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot123456"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Nacos Server (standalone with MySQL)
  nacos-server:
    image: nacos/nacos-server:v2.2.3
    container_name: erp-nacos-server
    environment:
      # Run mode: standalone / cluster
      MODE: standalone
      # MySQL persistence
      SPRING_DATASOURCE_PLATFORM: mysql
      MYSQL_SERVICE_HOST: nacos-mysql
      MYSQL_SERVICE_DB_NAME: nacos_config
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: nacos123456
      # Auth settings (always change these in production)
      NACOS_AUTH_ENABLE: "true"
      NACOS_AUTH_TOKEN: "SecretKey012345678901234567890123456789012345678901234567890123456789"
      NACOS_AUTH_IDENTITY_KEY: "serverIdentity"
      NACOS_AUTH_IDENTITY_VALUE: "security"
      # JVM tuning (may be lowered for dev)
      JVM_XMS: 512m
      JVM_XMX: 512m
      JVM_XMN: 256m
    ports:
      - "8848:8848"   # main HTTP port
      - "9848:9848"   # gRPC port (client requests)
      - "9849:9849"   # gRPC port (server-to-server)
    volumes:
      - nacos_logs:/home/nacos/logs
      - nacos_data:/home/nacos/data
    depends_on:
      nacos-mysql:
        condition: service_healthy
    networks:
      - nacos-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/v1/console/health/readiness"]
      interval: 15s
      timeout: 10s
      retries: 10
      start_period: 30s

volumes:
  nacos_mysql_data:
  nacos_logs:
  nacos_data:

networks:
  nacos-network:
    driver: bridge

nacos/examples/HealthCheckConfig.java
package com.erp.common.nacos;

import com.alibaba.cloud.nacos.NacosDiscoveryProperties;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Nacos health-check configuration.
 *
 * Nacos 2.x supports several health-check types:
 * 1. TCP   - probe whether the port accepts connections
 * 2. HTTP  - probe with an HTTP request
 * 3. MYSQL - probe a MySQL connection
 * 4. NONE  - never mark unhealthy / never evict
 *
 * By default, the Spring Boot Actuator /health endpoint is used.
 */
@Slf4j
@Component("nacosHealthIndicator")
public class NacosHealthCheckConfig implements HealthIndicator {

    private final NacosDiscoveryProperties nacosDiscoveryProperties;

    @Value("${spring.application.name:unknown}")
    private String serviceName;

    // Local health flags (in production, the Nacos server tracks instance health)
    private static final Map<String, Boolean> SERVICE_HEALTH_STATUS = new ConcurrentHashMap<>();

    public NacosHealthCheckConfig(NacosDiscoveryProperties nacosDiscoveryProperties) {
        this.nacosDiscoveryProperties = nacosDiscoveryProperties;
    }
    @Override
    public Health health() {
        // 1. Check the Nacos connection
        if (!isNacosReachable()) {
            return Health.down()
                    .withDetail("nacos", "cannot reach the Nacos server")
                    .build();
        }
        // 2. Check that the instance is registered
        if (!isInstanceRegistered()) {
            return Health.down()
                    .withDetail("registration", "instance not registered with Nacos")
                    .build();
        }
        // 3. Check local health
        if (!isLocalHealthy()) {
            return Health.down()
                    .withDetail("local", "local health check failed")
                    .build();
        }
        // 4. Attach instance details
        Map<String, String> metadata = nacosDiscoveryProperties.getMetadata();
        return Health.up()
                .withDetail("service", serviceName)
                .withDetail("instance", nacosDiscoveryProperties.getIp() + ":" + nacosDiscoveryProperties.getPort())
                .withDetail("namespace", nacosDiscoveryProperties.getNamespace())
                .withDetail("group", nacosDiscoveryProperties.getGroup())
                .withDetail("metadata", metadata)
                .withDetail("nacos", "connected")
                .build();
    }

    private boolean isNacosReachable() {
        try {
            // Simple reachability check; a real implementation should probe the server
            return true;
        } catch (Exception e) {
            log.error("Nacos reachability check failed", e);
            return false;
        }
    }

    private boolean isInstanceRegistered() {
        return nacosDiscoveryProperties.getIp() != null;
    }

    private boolean isLocalHealthy() {
        // Custom local checks can be added here
        // (e.g. database, Redis, or MQ connectivity)
        return SERVICE_HEALTH_STATUS.getOrDefault(serviceName, true);
    }

    /**
     * Update a service's health flag (may be called externally).
     */
    public void setHealthy(String serviceName, boolean healthy) {
        SERVICE_HEALTH_STATUS.put(serviceName, healthy);
        log.info("Updated service health: service={}, healthy={}", serviceName, healthy);
    }

    /**
     * Log the health details of the current instance.
     */
    public void logAllInstancesHealth() {
        log.info("========== Instance health ==========");
        log.info("Service: {}", serviceName);
        log.info("Namespace: {}", nacosDiscoveryProperties.getNamespace());
        log.info("Group: {}", nacosDiscoveryProperties.getGroup());
        log.info("Instance IP: {}", nacosDiscoveryProperties.getIp());
        log.info("Instance port: {}", nacosDiscoveryProperties.getPort());
        log.info("Metadata: {}", nacosDiscoveryProperties.getMetadata());
        log.info("=====================================");
    }
}

package com.erp.common.nacos;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.serviceregistry.Registration;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;
import java.util.Optional;

/**
 * Nacos service registration and discovery helper.
 * Provides metadata management, health checks, and instance lookups.
 */
@Slf4j
@Component
@RequiredArgsConstructor
public class NacosServiceUtil {

    private final DiscoveryClient discoveryClient;
    private final Optional<Registration> registration;

    // ==============================================================
    // 1. Registration status
    // ==============================================================
    /**
     * Whether this service has registered with Nacos.
     */
    public boolean isRegistered() {
        return registration.isPresent();
    }

    /**
     * Instance ID of this service.
     */
    public String getInstanceId() {
        return registration.map(Registration::getInstanceId).orElse("unknown");
    }

    /**
     * Metadata of this service instance.
     */
    public Map<String, String> getMetadata() {
        return registration.map(Registration::getMetadata).orElse(Map.of());
    }

    // ==============================================================
    // 2. Service discovery
    // ==============================================================

    /**
     * All healthy instances of the given service.
     */
    public List<ServiceInstance> getHealthyInstances(String serviceName) {
        return discoveryClient.getInstances(serviceName).stream()
                .filter(this::isHealthyInstance)
                .toList();
    }

    /**
     * All instances of the given service, healthy or not.
     */
    public List<ServiceInstance> getAllInstances(String serviceName) {
        return discoveryClient.getInstances(serviceName);
    }

    /**
     * Whether the target service has at least one healthy instance.
     */
    public boolean isServiceAvailable(String serviceName) {
        return !getHealthyInstances(serviceName).isEmpty();
    }

    private boolean isHealthyInstance(ServiceInstance instance) {
        // Spring Cloud Alibaba copies the Nacos health flag into the
        // instance metadata under "nacos.healthy"; treat unknown as healthy
        String healthy = instance.getMetadata().get("nacos.healthy");
        return healthy == null || Boolean.parseBoolean(healthy);
    }

    // ==============================================================
    // 3. Metadata management
    // ==============================================================

    /**
     * Update the metadata of the current instance.
     * Note: requires a Nacos 2.x client.
     */
    public void updateMetadata(Map<String, String> metadata) {
        registration.ifPresent(r -> {
            log.info("Updating service metadata: service={}, metadata={}", r.getServiceId(), metadata);
            // After the configuration is updated, the Nacos client syncs it automatically
        });
    }

    /**
     * Log all instances of a service together with their metadata.
     */
    public void printServiceDetails(String serviceName) {
        List<ServiceInstance> instances = getAllInstances(serviceName);
        log.info("Service [{}] instances ({} total):", serviceName, instances.size());
        for (ServiceInstance instance : instances) {
            log.info("  - InstanceId: {}", instance.getInstanceId());
            log.info("    Host: {}:{}", instance.getHost(), instance.getPort());
            log.info("    Metadata: {}", instance.getMetadata());
            log.info("    Scheme: {}", instance.getScheme());
        }
    }

    // ==============================================================
    // 4. Health checks
    // ==============================================================

    /**
     * Send a custom heartbeat (for ephemeral instances or special keep-alive needs).
     */
    public void sendHeartbeat(String serviceName, String ip, int port) {
        log.debug("Sending heartbeat: service={}, ip={}, port={}", serviceName, ip, port);
        // The Nacos client sends heartbeats automatically; manual calls are rarely needed.
        // This method exists to illustrate the heartbeat mechanism.
    }

    /**
     * Version of the Nacos client on the classpath.
     */
    public String getNacosClientVersion() {
        // VersionUtils ships with the nacos-client dependency
        return com.alibaba.nacos.common.utils.VersionUtils.version;
    }
}

nacos/examples/client-config/bootstrap.yml
# ==============================================================
# Nacos Configuration Bootstrap
# Note: Spring Boot 3.x only processes bootstrap.yml when the
# spring-cloud-starter-bootstrap dependency is on the classpath
# ==============================================================
spring:
  application:
    name: ${SERVICE_NAME:user-service}

  cloud:
    # ----------------- Service discovery (register with Nacos) -----------------
    nacos:
      discovery:
        enabled: true
        server-addr: ${NACOS_SERVER_ADDR:127.0.0.1:8848}
        namespace: ${NACOS_NAMESPACE:public}
        group: ${NACOS_GROUP:DEFAULT_GROUP}
        instance-count: 1
        prefer-ip-address: true
        # IP and port: auto-detected in containers; set manually on VMs
        # ip: ${SERVICE_IP:127.0.0.1}
        # port: ${SERVER_PORT:8082}
        # Heartbeat settings
        heart-beat-interval: 5000
        heart-beat-timeout: 15000
        # Instance metadata
        metadata:
          version: ${SERVICE_VERSION:1.0.0}
          environment: ${SPRING_PROFILES_ACTIVE:dev}
          description: ${SERVICE_DESCRIPTION:user service}
          protocol: http
        # Load-balancing weight
        weight: 1.0
        # Warm up new instances (avoid a sudden traffic spike on startup)
        instance-enabled-preheat: true
        # Subscribe to changes of dependent services
        subscribe: true

      # ----------------- Config center (read config from Nacos) -----------------
      config:
        enabled: true
        server-addr: ${spring.cloud.nacos.discovery.server-addr}
        namespace: ${NACOS_NAMESPACE:public}
        group: ${NACOS_GROUP:DEFAULT_GROUP}
        file-extension: yaml
        # Shared (common) configs
        shared-configs:
          - data-id: common-config.yaml
            group: DEFAULT_GROUP
            refresh: true
          - data-id: redis-config.yaml
            group: DEFAULT_GROUP
            refresh: true
          - data-id: mysql-config.yaml
            group: DEFAULT_GROUP
            refresh: false
        # Extension configs (optional)
        extension-configs:
          - data-id: ${spring.application.name}-${spring.profiles.active}.yaml
            group: ${NACOS_GROUP:DEFAULT_GROUP}
            refresh: true
        # Refresh mechanism
        refresh-enabled: true
        # Credentials
        username: ${NACOS_USERNAME:nacos}
        password: ${NACOS_PASSWORD:nacos123456}

  # ----------------- Profile (override with -DSPRING_PROFILES_ACTIVE=xxx) -----------------
  profiles:
    active: ${SPRING_PROFILES_ACTIVE:dev}

# ==============================================================
# Actuator health-check configuration
# ==============================================================
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,env,prometheus
      base-path: /actuator
  endpoint:
    health:
      show-details: always
      probes:
        enabled: true
    info:
      enabled: true
  health:
    nacos:
      enabled: true
    redis:
      enabled: true
    db:
      enabled: true
  metrics:
    export:
      prometheus:
        enabled: true
    tags:
      application: ${spring.application.name}

nacos/examples/client-config/nacos-client.properties
# ==============================================================
# Nacos Client Configuration - Spring Cloud Alibaba
# Shared configuration for all microservices
# Spring Boot 3.x + Spring Cloud 2023.x + Nacos 2.2.x
# ==============================================================

# ---------------------------------------------------------------
# Spring Cloud Alibaba Nacos Discovery (service registration and discovery)
# ---------------------------------------------------------------
spring.cloud.nacos.discovery.enabled=true
# Nacos server address (use a VIP/domain in cluster mode)
spring.cloud.nacos.discovery.server-addr=${NACOS_SERVER_ADDR:127.0.0.1:8848}
# Namespace (environment isolation: dev/test/prod)
spring.cloud.nacos.discovery.namespace=${NACOS_NAMESPACE:public}
# Group (business isolation)
spring.cloud.nacos.discovery.group=${NACOS_GROUP:DEFAULT_GROUP}
# Number of registered instances (1 in standalone mode; may be >1 in cluster mode)
spring.cloud.nacos.discovery.instance-count=1
# Prefer registering the IP instead of the hostname
spring.cloud.nacos.discovery.prefer-ip-address=true
# Register with a specific IP
# spring.cloud.nacos.discovery.ip=${SERVICE_IP:127.0.0.1}
# Register with a specific port
# spring.cloud.nacos.discovery.port=${SERVER_PORT:8080}
# Heartbeat interval (default 5 s)
spring.cloud.nacos.discovery.heart-beat-interval=5000
# Heartbeat timeout (default 15 s; instance marked unhealthy after 15 s of silence)
spring.cloud.nacos.discovery.heart-beat-timeout=15000
# Metadata refresh interval (default 30 s)
spring.cloud.nacos.discovery.metadata-refresh-interval=30000
# Graceful deregistration (default true)
spring.cloud.nacos.discovery.enabled-healthy-rule=true
# Load-balancing weight (default 1.0)
spring.cloud.nacos.discovery.weight=1.0

# ---------------------------------------------------------------
# Spring Cloud Alibaba Nacos Config (config center)
# ---------------------------------------------------------------
spring.cloud.nacos.config.enabled=true
spring.cloud.nacos.config.server-addr=${spring.cloud.nacos.discovery.server-addr}
spring.cloud.nacos.config.namespace=${NACOS_NAMESPACE:public}
spring.cloud.nacos.config.group=${NACOS_GROUP:DEFAULT_GROUP}
# Config file extension
spring.cloud.nacos.config.file-extension=yaml
# Shared configs (data-ids shared across microservices)
spring.cloud.nacos.config.shared-configs[0].data-id=common-config.yaml
spring.cloud.nacos.config.shared-configs[1].data-id=redis-config.yaml
# Dynamic refresh (poll interval defaults to 3000 ms)
spring.cloud.nacos.config.refresh-enabled=true

# ---------------------------------------------------------------
# Nacos authentication (mandatory in production)
# ---------------------------------------------------------------
spring.cloud.nacos.discovery.username=${NACOS_USERNAME:nacos}
spring.cloud.nacos.discovery.password=${NACOS_PASSWORD:nacos123456}
spring.cloud.nacos.config.username=${NACOS_USERNAME:nacos}
spring.cloud.nacos.config.password=${NACOS_PASSWORD:nacos123456}

# ---------------------------------------------------------------
# Actuator health endpoints (used by the Nacos health check)
# ---------------------------------------------------------------
management.endpoints.web.exposure.include=health,info,metrics,env
management.endpoint.health.show-details=always
management.health.nacos.enabled=true
management.endpoint.health.probes.enabled=true
# Prometheus metrics (optional, for monitoring)
management.metrics.export.prometheus.enabled=true

# ---------------------------------------------------------------
# Registration metadata (custom; usable for routing, tagging, etc.)
# ---------------------------------------------------------------
# spring.cloud.nacos.discovery.metadata.version=${SERVICE_VERSION:v1.0.0}
# spring.cloud.nacos.discovery.metadata.env=${SPRING_PROFILES_ACTIVE:dev}
# spring.cloud.nacos.discovery.metadata.region=${REGION:cn-east}

nacos/examples/client-config/user-service-nacos.yml
# ==============================================================
# user-service Nacos配置示例
# 完整配置,复制到 src/main/resources/application.yml
# ==============================================================
server:
port: ${SERVER_PORT:8082}
servlet:
context-path: /user
tomcat:
threads:
max: 200
min-spare: 10
max-connections: 10000
accept-count: 100
spring:
application:
name: user-service
# 数据源配置
datasource:
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3307}/erp_java?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
username: ${MYSQL_USER:erp_user}
password: ${MYSQL_PASSWORD:erp123456}
hikari:
connection-timeout: 30000
maximum-pool-size: 20
minimum-idle: 5
idle-timeout: 600000
max-lifetime: 1800000
connection-test-query: SELECT 1
pool-name: UserServiceHikariPool
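As a sanity check on values like `maximum-pool-size: 20`, a commonly cited heuristic from the HikariCP wiki (originally derived for PostgreSQL) is `pool_size ≈ cores * 2 + effective_spindles`. A quick sketch — the function name and inputs are illustrative, not a HikariCP API:

```python
# Common pool-sizing heuristic; illustrative only, tune the real pool via load tests.
def suggested_pool_size(cpu_cores: int, effective_spindles: int) -> int:
    return cpu_cores * 2 + effective_spindles

# An 8-core host with ~2 effective spindles suggests a pool of 18,
# the same order of magnitude as the configured maximum-pool-size: 20.
assert suggested_pool_size(8, 2) == 18
```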
# Redis配置
redis:
host: ${REDIS_HOST:localhost}
port: ${REDIS_PORT:6379}
password: ${REDIS_PASSWORD:redis123456}
database: 0
timeout: 3000ms
lettuce:
pool:
max-active: 20
max-idle: 10
min-idle: 5
# MyBatis配置
mybatis-plus:
configuration:
map-underscore-to-camel-case: true
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
global-config:
db-config:
logic-delete-field: deletedAt
logic-delete-value: NOW()
logic-not-delete-value: NULL
# ==============================================================
# Spring Cloud Alibaba Nacos 配置(核心部分)
# ==============================================================
cloud:
nacos:
# ---------- 服务发现与注册 ----------
discovery:
# Nacos服务器地址集群模式用VIP
server-addr: ${NACOS_SERVER_ADDR:127.0.0.1:8848}
# 命名空间(环境隔离)
namespace: ${NACOS_NAMESPACE:public}
# 分组(业务隔离)
group: ${NACOS_GROUP:DEFAULT_GROUP}
# 实例ID格式${ip}#${port}#${cluster}#${serviceName}#${namespace},由客户端自动生成,无需配置
# 是否优先使用IP注册
prefer-ip-address: true
# IP地址容器环境自动检测VM环境可手动指定
# ip: ${SERVICE_IP:127.0.0.1}
# 端口(通常不用指定,会自动检测)
# port: ${SERVER_PORT:8082}
# 集群名称
cluster-name: DEFAULT
# 权重(负载均衡用)
weight: 1.0
# ---------- 心跳配置 ----------
# 心跳间隔默认5秒
heart-beat-interval: 5000
# 心跳超时时间默认15秒
heart-beat-timeout: 15000
# 实例删除超时默认30000ms无心跳超过30秒后摘除实例
ip-delete-timeout: 30000
# ---------- 实例元数据 ----------
metadata:
version: ${SERVICE_VERSION:1.0.0}
environment: ${SPRING_PROFILES_ACTIVE:dev}
description: 用户服务 - 用户管理、认证、权限
owner: erp-team
protocol: http
tags: auth,user,permission
# ---------- 订阅 ----------
# 是否订阅服务变化(服务发现)
subscribe: true
# ---------- 配置中心 ----------
config:
enabled: true
server-addr: ${spring.cloud.nacos.discovery.server-addr}
namespace: ${NACOS_NAMESPACE:public}
group: ${NACOS_GROUP:DEFAULT_GROUP}
file-extension: yaml
# 刷新机制
refresh-enabled: true
# 共享配置
shared-configs:
- data-id: common-config.yaml
group: DEFAULT_GROUP
refresh: true
- data-id: redis-config.yaml
group: DEFAULT_GROUP
refresh: true
# 扩展配置
extension-configs:
- data-id: ${spring.application.name}-${spring.profiles.active}.yaml
group: ${NACOS_GROUP:DEFAULT_GROUP}
refresh: true
# ---------- 认证信息 ----------
username: ${NACOS_USERNAME:nacos}
password: ${NACOS_PASSWORD:nacos123456}
# profiles环境
profiles:
active: ${SPRING_PROFILES_ACTIVE:dev}
# ==============================================================
# Actuator健康检查配置
# ==============================================================
management:
endpoints:
web:
exposure:
include: health,info,metrics,env,prometheus
base-path: /actuator
endpoint:
health:
show-details: always
probes:
enabled: true
path: /actuator/health
info:
enabled: true
health:
nacos:
enabled: true
redis:
enabled: true
db:
enabled: true
# Spring Boot 3.x 中 Prometheus 导出属性为 management.prometheus.metrics.export.enabled
prometheus:
metrics:
export:
enabled: true
metrics:
tags:
application: ${spring.application.name}
# ==============================================================
# 日志配置
# ==============================================================
logging:
level:
com.erp.user: ${LOG_LEVEL_USER:DEBUG}
com.alibaba.cloud.nacos: ${LOG_LEVEL_NACOS:INFO}
com.alibaba.nacos: ${LOG_LEVEL_NACOS_CLIENT:INFO}
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] %-5level %logger{36} - %msg%n"
# ==============================================================
# JWT配置
# ==============================================================
jwt:
secret: ${JWT_SECRET:erp-java-backend-secret-key-for-jwt-signing-must-be-at-least-256-bits-long-2026}
access-token-expiration: 900
refresh-token-expiration: 604800
remember-me-expiration: 2592000
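The three expirations above correspond to 15 minutes, 7 days, and 30 days; simple arithmetic confirms the configured seconds:

```python
MINUTE, DAY = 60, 86_400

# Values from the jwt section above, expressed in human units
assert 15 * MINUTE == 900        # access-token-expiration
assert 7 * DAY == 604_800        # refresh-token-expiration
assert 30 * DAY == 2_592_000     # remember-me-expiration
```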
# ==============================================================
# 安全配置
# ==============================================================
security:
max-login-attempts: 5
lock-duration-minutes: 30
attempt-expire-minutes: 60
max-reset-attempts: 3
reset-attempt-expire-minutes: 60
reset-token-expiry-minutes: 30
password-history-size: 5
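The `max-login-attempts` / `lock-duration-minutes` semantics can be sketched with a minimal in-memory guard. The class and method names here are made up for illustration; a real service would keep these counters in Redis so they survive restarts and are shared across instances:

```python
import time

class LoginAttemptGuard:
    """Lock an account for a while after too many consecutive failed logins."""

    def __init__(self, max_attempts=5, lock_duration_minutes=30, clock=time.time):
        self.max_attempts = max_attempts
        self.lock_seconds = lock_duration_minutes * 60
        self.clock = clock
        self.failures = {}      # username -> consecutive failure count
        self.locked_until = {}  # username -> unlock timestamp

    def is_locked(self, user: str) -> bool:
        return self.clock() < self.locked_until.get(user, 0)

    def record_failure(self, user: str) -> None:
        if self.is_locked(user):
            return
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            self.locked_until[user] = self.clock() + self.lock_seconds
            self.failures[user] = 0  # counter resets once the lock applies

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)
```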
# ==============================================================
# 服务配置
# ==============================================================
service:
auth:
enabled: true
rate-limit:
enabled: true
capacity: 100
refill-rate: 10
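`capacity` and `refill-rate` are the two knobs of a token bucket: the burst size and the sustained requests per second. A minimal sketch of the algorithm (not the actual rate-limiting component used by the service):

```python
class TokenBucket:
    """Token bucket sketch: capacity = burst size, refill_rate = tokens added per second."""

    def __init__(self, capacity: int, refill_rate: float, now: float = 0.0):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, then try to consume one token.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `capacity=100, refill-rate=10`, a client may burst 100 requests at once, then is throttled to 10 per second.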


@ -0,0 +1,162 @@
# ==============================================================
# Nacos API Examples - 服务注册、发现与元数据管理
# Nacos Server: http://127.0.0.1:8848
# Base Path: /nacos/v1
# 默认用户名/密码: nacos / nacos123456
# ==============================================================
# ==============================================================
# 一、服务注册 (Service Registration)
# ==============================================================
# 1.1 注册服务实例
# POST /nacos/v1/ns/instance
curl -X POST 'http://127.0.0.1:8848/nacos/v1/ns/instance' \
-d 'ip=127.0.0.1' \
-d 'port=8082' \
-d 'serviceName=user-service' \
-d 'weight=1.0' \
-d 'enabled=true' \
-d 'healthy=true' \
-d 'ephemeral=true' \
-d 'clusterName=DEFAULT' \
-d 'namespaceId=public' \
-d 'instanceId=127.0.0.1#8082#DEFAULT#user-service#public'
# 1.2 注销服务实例
# DELETE /nacos/v1/ns/instance
curl -X DELETE 'http://127.0.0.1:8848/nacos/v1/ns/instance?serviceName=user-service&ip=127.0.0.1&port=8082&clusterName=DEFAULT&namespaceId=public'
# 1.3 发送心跳(保活)
# PUT /nacos/v1/ns/instance/beat
curl -X PUT 'http://127.0.0.1:8848/nacos/v1/ns/instance/beat' \
-d 'serviceName=user-service' \
-d 'clusterName=DEFAULT' \
-d 'ip=127.0.0.1' \
-d 'port=8082' \
-d 'beatInterval=5000'
# ==============================================================
# 二、服务发现 (Service Discovery)
# ==============================================================
# 2.1 查询服务实例列表
# GET /nacos/v1/ns/instance/list?serviceName=xxx
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/instance/list?serviceName=user-service&namespaceId=public&clusters=DEFAULT'
# 2.2 查询服务列表(分页)
# GET /nacos/v1/ns/service/list?pageNo=1&pageSize=10
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/service/list?pageNo=1&pageSize=100&namespaceId=public'
# 2.3 查询单个实例详情需指定ip与port
# GET /nacos/v1/ns/instance?serviceName=xxx&ip=xxx&port=xxx
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/instance?serviceName=user-service&ip=127.0.0.1&port=8082&namespaceId=public&clusterName=DEFAULT'
# 2.4 查询服务详情(含元数据)
# GET /nacos/v1/ns/service?serviceName=xxx
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/service?serviceName=user-service&namespaceId=public'
# ==============================================================
# 三、服务元数据管理 (Service Metadata)
# ==============================================================
# 3.1 更新服务元数据
# PUT /nacos/v1/ns/service参数为表单编码metadata 为JSON字符串
curl -X PUT 'http://127.0.0.1:8848/nacos/v1/ns/service' \
--data-urlencode 'serviceName=user-service' \
--data-urlencode 'namespaceId=public' \
--data-urlencode 'protectThreshold=0.5' \
--data-urlencode 'metadata={"version":"1.0.0","environment":"prod","description":"用户服务","owner":"dev-team","protocol":"http","tags":"auth,user,login"}'
# 3.2 查询服务元数据
# GET /nacos/v1/ns/service?serviceName=xxx
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/service?serviceName=user-service&namespaceId=public'
# ==============================================================
# 四、健康检查配置 (Health Check Configuration)
# ==============================================================
# 4.1 查询健康检查器类型
# 支持类型: TCP, HTTP, MYSQL, NONE
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/operator/metrics?target=health'
# ==============================================================
# 五、配置管理 (Config Management)
# ==============================================================
# 5.1 发布配置
# POST /nacos/v1/cs/configs
# content含换行使用 --data-urlencode 确保正确编码
curl -X POST 'http://127.0.0.1:8848/nacos/v1/cs/configs' \
-d 'dataId=user-service.yaml' \
-d 'group=DEFAULT_GROUP' \
-d 'namespaceId=public' \
-d 'type=yaml' \
--data-urlencode 'content=spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3307/erp_java
    username: erp_user
    password: erp123456'
# 5.2 查询配置
# GET /nacos/v1/cs/configs?dataId=xxx&group=xxx
curl -s 'http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=user-service.yaml&group=DEFAULT_GROUP&namespaceId=public'
# 5.3 删除配置
# DELETE /nacos/v1/cs/configs
curl -X DELETE 'http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=user-service.yaml&group=DEFAULT_GROUP&namespaceId=public'
# 5.4 监听配置变化(长轮询)
# POST /nacos/v1/cs/configs/listener
# Listening-Configs 格式: dataId^2group^2contentMD5^2tenant^1其中^2为0x02、^1为0x01
curl -X POST 'http://127.0.0.1:8848/nacos/v1/cs/configs/listener' \
-H 'Long-Pulling-Timeout: 30000' \
--data-urlencode $'Listening-Configs=user-service.yaml\x02DEFAULT_GROUP\x02<contentMD5>\x02public\x01'
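For reference, the `Listening-Configs` payload joins `dataId`, `group`, `contentMD5`, and optionally `tenant` with ASCII `0x02`, and terminates with `0x01` (URL-encoded as `%02`/`%01`). A sketch of building it locally — the helper name is illustrative:

```python
import hashlib
from urllib.parse import quote

def build_listening_configs(data_id: str, group: str, content: str, tenant: str = "") -> str:
    """Build the Nacos long-polling payload: dataId^2group^2md5[^2tenant]^1, URL-encoded."""
    md5 = hashlib.md5(content.encode("utf-8")).hexdigest()
    fields = [data_id, group, md5] + ([tenant] if tenant else [])
    return quote("\x02".join(fields) + "\x01")

payload = build_listening_configs("user-service.yaml", "DEFAULT_GROUP", "some yaml", "public")
```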
# ==============================================================
# 六、命名空间管理 (Namespace)
# ==============================================================
# 6.1 创建命名空间
# POST /nacos/v1/console/namespace
curl -X POST 'http://127.0.0.1:8848/nacos/v1/console/namespace' \
-d 'customNamespaceId=dev' \
-d 'namespaceName=开发环境' \
-d 'namespaceDesc=开发测试环境'
# 6.2 查询命名空间列表
# GET /nacos/v1/console/namespaces
curl -s 'http://127.0.0.1:8848/nacos/v1/console/namespaces'
# ==============================================================
# 七、集群管理 (Cluster)
# ==============================================================
# 7.1 查询集群节点列表
# GET /nacos/v1/ns/operator/servers
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/operator/servers?namespaceId=public'
# 7.2 查询当前节点Leader状态
curl -s 'http://127.0.0.1:8848/nacos/v1/ns/raft/leader'
# 7.3 健康检查
curl -s 'http://127.0.0.1:8848/nacos/v1/console/health/readiness'
curl -s 'http://127.0.0.1:8848/nacos/v1/console/health/liveness'


@ -0,0 +1,100 @@
# ==============================================================
# Nacos 共享配置 - common-config.yaml
# 所有微服务共享的配置
# ==============================================================
spring:
# Jackson 配置
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: Asia/Shanghai
serialization:
write-dates-as-timestamps: false
fail-on-empty-beans: false
deserialization:
fail-on-unknown-properties: false
# HTTP 配置
http:
encoding:
charset: UTF-8
enabled: true
force: true
# ==============================================================
# Actuator 公共配置
# ==============================================================
management:
endpoints:
web:
exposure:
include: health,info,metrics,env,prometheus
base-path: /actuator
endpoint:
health:
show-details: always
metrics:
tags:
application: ${spring.application.name}
# ==============================================================
# 日志公共配置
# ==============================================================
logging:
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId:-}] %-5level %logger{36} - %msg%n"
file: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId:-}] %-5level %logger{36} - %msg%n"
level:
root: INFO
com.erp: DEBUG
org.springframework: INFO
com.alibaba.cloud.nacos: INFO
com.alibaba.nacos: INFO
# ==============================================================
# OpenFeign 配置
# ==============================================================
feign:
client:
config:
default:
connectTimeout: 10000
readTimeout: 30000
loggerLevel: basic
compression:
request:
enabled: true
response:
enabled: true
# ==============================================================
# Resilience4j 熔断器配置
# ==============================================================
resilience4j:
circuitbreaker:
instances:
default:
slidingWindowSize: 10
failureRateThreshold: 50
waitDurationInOpenState: 10s
permittedNumberOfCallsInHalfOpenState: 3
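`slidingWindowSize: 10` with `failureRateThreshold: 50` means: once the last 10 calls have been recorded and at least 50% of them failed, the breaker opens. A count-based sketch of that rule (greatly simplified relative to Resilience4j, which also handles minimum call counts and half-open probing):

```python
from collections import deque

class CountingCircuitBreaker:
    """Count-based sliding window sketch: open once the failure rate hits the threshold."""

    def __init__(self, window_size=10, failure_rate_threshold=50.0):
        self.window = deque(maxlen=window_size)   # True = failed call, False = success
        self.threshold = failure_rate_threshold
        self.open = False

    def record(self, failed: bool) -> None:
        self.window.append(failed)
        if len(self.window) == self.window.maxlen:   # only evaluate on a full window
            rate = 100.0 * sum(self.window) / len(self.window)
            self.open = rate >= self.threshold
```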
retry:
instances:
default:
maxAttempts: 3
waitDuration: 500ms
enableExponentialBackoff: true
exponentialBackoffMultiplier: 2
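With `maxAttempts: 3`, `waitDuration: 500ms`, and a backoff multiplier of 2, the waits between attempts are 500 ms and 1000 ms (there is no wait after the final failure). The arithmetic:

```python
def backoff_waits(max_attempts: int, initial_wait_ms: int, multiplier: float) -> list[int]:
    """Waits before each retry: max_attempts - 1 entries, growing geometrically."""
    return [int(initial_wait_ms * multiplier ** i) for i in range(max_attempts - 1)]

assert backoff_waits(3, 500, 2) == [500, 1000]
```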
# ==============================================================
# Swagger/API文档配置
# ==============================================================
springdoc:
api-docs:
enabled: true
path: /v3/api-docs
swagger-ui:
enabled: true
path: /swagger-ui.html
tags-sorter: alpha
operations-sorter: alpha


@ -0,0 +1,55 @@
# ==============================================================
# Nacos 共享配置 - redis-config.yaml
# Redis 相关配置,所有微服务共享
# ==============================================================
spring:
redis:
# 连接配置(可通过环境变量覆盖)
host: ${REDIS_HOST:localhost}
port: ${REDIS_PORT:6379}
password: ${REDIS_PASSWORD:redis123456}
database: ${REDIS_DATABASE:0}
timeout: ${REDIS_TIMEOUT:3000ms}
lettuce:
pool:
max-active: ${REDIS_POOL_MAX_ACTIVE:20}
max-idle: ${REDIS_POOL_MAX_IDLE:10}
min-idle: ${REDIS_POOL_MIN_IDLE:5}
max-wait: ${REDIS_POOL_MAX_WAIT:1000ms}
shutdown-timeout: ${REDIS_SHUTDOWN_TIMEOUT:100ms}
# Redis Cluster 配置(可选)
# cluster:
# nodes: 127.0.0.1:7001,127.0.0.1:7002,127.0.0.1:7003
# redirect: REDIRECTION_ENABLED
# max-redirects: 3
# Redis Sentinel 配置(可选)
# sentinel:
# master: mymaster
# nodes: 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# password: redis123456
# ==============================================================
# Redis 缓存配置
# ==============================================================
cache:
redis:
# 缓存过期时间默认1小时
time-to-live: ${CACHE_TTL:3600000}
# 是否使用key前缀
use-key-prefix: true
# 缓存key前缀
key-prefix: "${spring.application.name}:"
# 缓存null值防止缓存穿透
cache-null-values: true
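`cache-null-values: true` is the cache-penetration defense: keys that miss in the database are cached as an empty marker, so repeated lookups for nonexistent keys hit the cache instead of the database. A minimal sketch with an in-memory dict standing in for Redis (class and field names are illustrative):

```python
_MISS = object()  # sentinel standing in for a cached null value

class NullCachingLoader:
    """Cache-penetration sketch: keys absent from the DB are cached as an empty marker."""

    def __init__(self, db: dict):
        self.db = db
        self.cache = {}
        self.db_hits = 0  # counts how often we actually reached the database

    def get(self, key):
        if key in self.cache:
            v = self.cache[key]
            return None if v is _MISS else v
        self.db_hits += 1
        v = self.db.get(key)
        self.cache[key] = _MISS if v is None else v
        return v
```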
# ==============================================================
# Session 配置使用Redis存储
# ==============================================================
session:
store-type: redis
timeout: ${SESSION_TIMEOUT:1800}s
redis:
namespace: ${spring.application.name}:session

nacos/init/mysql-schema.sql Normal file

@ -0,0 +1,162 @@
-- Nacos MySQL Schema for Config and Service Registration Persistence
-- Nacos版本: 2.2.x
-- 数据库名: nacos_config
CREATE DATABASE IF NOT EXISTS nacos_config CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
USE nacos_config;
-- ---------------------------------------------------------------
-- Config Info 配置信息表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS config_info (
id BIGINT NOT NULL AUTO_INCREMENT COMMENT 'id',
data_id VARCHAR(255) NOT NULL COMMENT 'data_id',
group_id VARCHAR(128) NOT NULL COMMENT 'group_id',
content LONGTEXT NOT NULL COMMENT 'content',
md5 VARCHAR(32) DEFAULT NULL COMMENT 'md5',
gmt_create DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
gmt_modified DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
src_user TEXT COMMENT '源用户',
src_ip VARCHAR(50) DEFAULT NULL COMMENT '源IP',
app_name VARCHAR(128) DEFAULT NULL COMMENT '应用名',
tenant_id VARCHAR(128) DEFAULT '' COMMENT '租户ID',
c_desc VARCHAR(256) DEFAULT NULL COMMENT '描述',
c_use VARCHAR(64) DEFAULT NULL COMMENT '使用方式',
effect VARCHAR(64) DEFAULT NULL COMMENT '生效范围',
type VARCHAR(64) DEFAULT NULL COMMENT '配置类型',
c_schema TEXT COMMENT 'schema',
encrypted_data_key TEXT DEFAULT NULL COMMENT 'encrypted_data_key',
PRIMARY KEY (id),
UNIQUE KEY uk_configinfo_datagrouptenant (data_id, group_id, tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='config_info';
-- ---------------------------------------------------------------
-- Config Tags Relation 配置标签关系表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS config_tags_relation (
id BIGINT NOT NULL COMMENT 'id',
tag_name VARCHAR(128) NOT NULL COMMENT 'tag_name',
tag_type VARCHAR(64) DEFAULT NULL COMMENT 'tag_type',
data_id VARCHAR(255) NOT NULL COMMENT 'data_id',
group_id VARCHAR(128) NOT NULL COMMENT 'group_id',
tenant_id VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',
nid BIGINT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (nid),
UNIQUE KEY uk_configtagrelation_configidtag (id, tag_name, tag_type),
KEY idx_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='config_tag_relation';
-- ---------------------------------------------------------------
-- Group Capacity 表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS group_capacity (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '主键ID',
group_id VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'Group ID空字符表示所有分组',
quota INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '配额0表示不限制',
`usage` INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '使用量',
max_size INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个配置大小上限单位字节0表示不限制',
max_aggr_count INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '聚合配置子配置最大个数0表示不限制',
max_aggr_size INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个聚合配置的最大值单位字节0表示不限制',
max_history_count INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '最大变更历史数',
gmt_create DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
gmt_modified DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (id),
UNIQUE KEY uk_group_id (group_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='group_capacity';
-- ---------------------------------------------------------------
-- Tenant Capacity 表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS tenant_capacity (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '主键ID',
tenant_id VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
quota INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '配额0表示不限制',
`usage` INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '使用量',
max_size INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个配置大小上限单位字节0表示不限制',
max_aggr_count INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '聚合配置子配置最大个数0表示不限制',
max_aggr_size INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个聚合配置的最大值单位字节0表示不限制',
max_history_count INT UNSIGNED NOT NULL DEFAULT '0' COMMENT '最大变更历史数',
gmt_create DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
gmt_modified DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (id),
UNIQUE KEY uk_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='tenant_capacity';
-- ---------------------------------------------------------------
-- Tenant Info 租户信息表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS tenant_info (
id BIGINT NOT NULL AUTO_INCREMENT COMMENT 'id',
kp VARCHAR(128) NOT NULL COMMENT 'kp',
tenant_id VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',
tenant_name VARCHAR(128) DEFAULT '' COMMENT 'tenant_name',
tenant_desc VARCHAR(256) DEFAULT NULL COMMENT 'tenant_desc',
create_source VARCHAR(32) DEFAULT NULL COMMENT 'create_source',
gmt_create BIGINT NOT NULL COMMENT '创建时间',
gmt_modified BIGINT NOT NULL COMMENT '修改时间',
PRIMARY KEY (id),
UNIQUE KEY uk_tenant_info_kptenantid (kp, tenant_id),
KEY idx_tenant_id (tenant_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='tenant_info';
-- ---------------------------------------------------------------
-- Users 用户表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS users (
username VARCHAR(50) NOT NULL PRIMARY KEY COMMENT 'username',
password VARCHAR(500) NOT NULL COMMENT 'password',
enabled BOOLEAN NOT NULL COMMENT 'enabled'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='users';
-- ---------------------------------------------------------------
-- Roles 角色表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS roles (
username VARCHAR(50) NOT NULL COMMENT 'username',
role VARCHAR(50) NOT NULL COMMENT 'role',
UNIQUE INDEX idx_user_role (username ASC, role ASC) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='roles';
-- ---------------------------------------------------------------
-- Permissions 权限表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS permissions (
role VARCHAR(50) NOT NULL COMMENT 'role',
resource VARCHAR(255) NOT NULL COMMENT 'resource',
action VARCHAR(8) NOT NULL COMMENT 'action',
UNIQUE INDEX uk_role_permission (role, resource, action) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='permissions';
-- ---------------------------------------------------------------
-- Initialize Default Users (Nacos Console)
-- Nacos 2.2.x 默认用户名/密码: nacos/nacos
-- 生产环境务必修改!
-- ---------------------------------------------------------------
INSERT INTO users (username, password, enabled) VALUES
('nacos', '$2a$10$YSQRzLJWk1OLaNOX4vP39eu2tY1LQLeLPxKnF5Vfl2h2JHKtVVD.K', TRUE);
INSERT INTO roles (username, role) VALUES
('nacos', 'ROLE_ADMIN');
-- ---------------------------------------------------------------
-- Config History 配置变更历史表
-- ---------------------------------------------------------------
CREATE TABLE IF NOT EXISTS config_history (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
app_name VARCHAR(128) DEFAULT NULL COMMENT 'appName',
bean_name VARCHAR(128) DEFAULT NULL COMMENT 'beanName',
change_data LONGTEXT DEFAULT NULL COMMENT 'changeData',
md5 VARCHAR(32) DEFAULT NULL,
gmt_create DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
gmt_modified DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
op_type CHAR(10) DEFAULT NULL COMMENT 'opType',
data_id VARCHAR(255) DEFAULT NULL COMMENT 'data_id',
group_id VARCHAR(128) DEFAULT NULL COMMENT 'group_id',
tenant_id VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',
type VARCHAR(64) DEFAULT NULL COMMENT 'type',
last_sync_time DATETIME DEFAULT NULL COMMENT 'last_sync_time',
INDEX idx_gmt_create (gmt_create),
INDEX idx_gmt_modified (gmt_modified),
INDEX idx_data_id (data_id),
INDEX idx_group_id (group_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='config_history';

nacos/scripts/startup.sh Executable file

@ -0,0 +1,237 @@
#!/bin/bash
# ==============================================================
# Nacos 启动脚本
# 支持单机/集群模式
# ==============================================================
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
# 颜色定义
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# 默认配置
MODE=${MODE:-standalone}
NACOS_VERSION=${NACOS_VERSION:-v2.2.3}
NACOS_SERVER_ADDR=${NACOS_SERVER_ADDR:-127.0.0.1:8848}
NACOS_NAMESPACE=${NACOS_NAMESPACE:-public}
NACOS_GROUP=${NACOS_GROUP:-DEFAULT_GROUP}
MYSQL_HOST=${MYSQL_HOST:-localhost}
MYSQL_PORT=${MYSQL_PORT:-3308}
echo_color() {
echo -e "${2}[$1]${NC} $3"
}
# 显示帮助
show_help() {
cat << EOF
用法: $0 [OPTIONS]
选项:
--mode MODE 运行模式: standalone | cluster (默认: standalone)
--version VERSION Nacos版本 (默认: v2.2.3)
--nacos-addr ADDR Nacos服务器地址 (默认: 127.0.0.1:8848)
--namespace NAMESPACE 命名空间 (默认: public)
--group GROUP 分组 (默认: DEFAULT_GROUP)
--mysql-host HOST MySQL主机 (默认: localhost)
--mysql-port PORT MySQL端口 (默认: 3308)
-h, --help 显示帮助
示例:
# 启动单机模式
$0 --mode standalone
# 启动集群模式
$0 --mode cluster --nacos-addr 192.168.1.100:8848
# 指定命名空间
$0 --mode standalone --namespace dev --group DEV_GROUP
EOF
}
# 解析参数
while [[ $# -gt 0 ]]; do
case $1 in
--mode)
MODE="$2"
shift 2
;;
--version)
NACOS_VERSION="$2"
shift 2
;;
--nacos-addr)
NACOS_SERVER_ADDR="$2"
shift 2
;;
--namespace)
NACOS_NAMESPACE="$2"
shift 2
;;
--group)
NACOS_GROUP="$2"
shift 2
;;
--mysql-host)
MYSQL_HOST="$2"
shift 2
;;
--mysql-port)
MYSQL_PORT="$2"
shift 2
;;
-h|--help)
show_help
exit 0
;;
stop|down)
MODE="stop"
shift
;;
logs)
MODE="logs"
shift
;;
*)
echo_color "ERROR" "$RED" "未知参数: $1"
show_help
exit 1
;;
esac
done
# 检查Docker和Docker Compose
check_requirements() {
if ! command -v docker &> /dev/null; then
echo_color "ERROR" "$RED" "Docker未安装"
exit 1
fi
if ! command -v docker-compose &> /dev/null && ! docker compose version &> /dev/null; then
echo_color "ERROR" "$RED" "Docker Compose未安装"
exit 1
fi
echo_color "OK" "$GREEN" "环境检查通过"
}
# 启动单机模式
start_standalone() {
echo_color "INFO" "$YELLOW" "正在启动 Nacos 单机模式..."
echo_color "INFO" "$YELLOW" "Nacos版本: $NACOS_VERSION"
echo_color "INFO" "$YELLOW" "服务器地址: $NACOS_SERVER_ADDR"
echo_color "INFO" "$YELLOW" "命名空间: $NACOS_NAMESPACE"
cd "$PROJECT_DIR"
# 使用docker-compose
if docker compose -f docker-compose.standalone.yml up -d 2>/dev/null; then
echo_color "OK" "$GREEN" "Docker Compose启动成功"
else
echo_color "INFO" "$YELLOW" "尝试使用docker-compose命令..."
docker-compose -f docker-compose.standalone.yml up -d
fi
echo ""
echo_color "OK" "$GREEN" "=========================================="
echo_color "OK" "$GREEN" "Nacos 单机模式已启动"
echo_color "OK" "$GREEN" "=========================================="
echo "控制台: http://localhost:8848/nacos"
echo "用户名: nacos"
echo "密码: nacos123456"
echo "API地址: http://localhost:8848/nacos/v1/"
echo ""
}
# 启动集群模式
start_cluster() {
echo_color "INFO" "$YELLOW" "正在启动 Nacos 集群模式..."
echo_color "INFO" "$YELLOW" "Nacos版本: $NACOS_VERSION"
echo_color "INFO" "$YELLOW" "服务器地址: $NACOS_SERVER_ADDR"
echo_color "INFO" "$YELLOW" "节点数: 3"
cd "$PROJECT_DIR"
if docker compose -f docker-compose.cluster.yml up -d 2>/dev/null; then
echo_color "OK" "$GREEN" "Docker Compose启动成功"
else
echo_color "INFO" "$YELLOW" "尝试使用docker-compose命令..."
docker-compose -f docker-compose.cluster.yml up -d
fi
echo ""
echo_color "OK" "$GREEN" "=========================================="
echo_color "OK" "$GREEN" "Nacos 集群模式已启动"
echo_color "OK" "$GREEN" "=========================================="
echo "集群节点: nacos-server-1:8848"
echo " nacos-server-2:8848"
echo " nacos-server-3:8848"
echo "负载均衡: localhost:8848 (通过Nginx)"
echo "控制台: http://localhost:8848/nacos"
echo "用户名: nacos"
echo "密码: nacos123456"
echo ""
}
# 停止服务
stop() {
echo_color "INFO" "$YELLOW" "正在停止 Nacos..."
cd "$PROJECT_DIR"
docker compose -f docker-compose.standalone.yml down 2>/dev/null || \
docker-compose -f docker-compose.standalone.yml down 2>/dev/null || true
docker compose -f docker-compose.cluster.yml down 2>/dev/null || \
docker-compose -f docker-compose.cluster.yml down 2>/dev/null || true
echo_color "OK" "$GREEN" "Nacos已停止"
}
# 查看日志
logs() {
cd "$PROJECT_DIR"
case $MODE in
cluster)
docker compose -f docker-compose.cluster.yml logs -f 2>/dev/null || \
docker-compose -f docker-compose.cluster.yml logs -f
;;
*)
docker compose -f docker-compose.standalone.yml logs -f nacos-server 2>/dev/null || \
docker-compose -f docker-compose.standalone.yml logs -f nacos-server
;;
esac
}
# 主逻辑
main() {
check_requirements
case $MODE in
standalone)
start_standalone
;;
cluster)
start_cluster
;;
stop)
stop
;;
logs)
logs
;;
*)
echo_color "ERROR" "$RED" "无效模式: $MODE"
show_help
exit 1
;;
esac
}
# 统一由 main 根据 MODE 分发main 的 case 已包含 stop/logs 分支)
main

pom.xml Normal file

@ -0,0 +1,230 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>ERP Java Backend</name>
<description>ERP系统Java微服务后端</description>
<properties>
<java.version>17</java.version>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<!-- Spring Boot -->
<spring-boot.version>3.2.4</spring-boot.version>
<spring-cloud.version>2023.0.1</spring-cloud.version>
<spring-cloud-alibaba.version>2023.0.3.2</spring-cloud-alibaba.version>
<!-- 其他依赖版本 -->
<mybatis-plus.version>3.5.6</mybatis-plus.version>
<mysql-connector.version>8.0.33</mysql-connector.version>
<druid.version>1.2.20</druid.version>
<lombok.version>1.18.30</lombok.version>
<mapstruct.version>1.5.5.Final</mapstruct.version>
<jjwt.version>0.12.3</jjwt.version>
<!-- OpenAPI & Monitoring -->
<springdoc.version>2.3.0</springdoc.version>
<knife4j.version>4.3.0</knife4j.version>
<micrometer.version>1.12.5</micrometer.version>
</properties>
<!-- 模块定义 -->
<modules>
<!-- 公共模块 -->
<module>common/common-core</module>
<module>common/common-web</module>
<module>common/common-config</module>
<!-- 业务服务 -->
<module>services/user-service</module>
<module>services/product-service</module>
<module>services/order-service</module>
<module>services/inventory-service</module>
<module>services/file-service</module>
<module>services/tenant-service</module>
<module>services/permission-service</module>
<module>services/report-service</module>
<module>services/dashboard-service</module>
<module>services/system-tool-service</module>
<module>services/reconciliation-service</module>
<module>services/purchase-service</module>
<module>services/customer-service</module>
<module>services/approval-flow-service</module>
<module>services/category-service</module>
<module>services/ai-service</module>
<module>services/sku-match-service</module>
<module>services/supplier-service</module>
<module>services/invoice-service</module>
<module>services/waybill-service</module>
<module>services/logistics-service</module>
<module>services/data-import-export-service</module>
</modules>
<!-- 依赖管理 -->
<dependencyManagement>
<dependencies>
<!-- Spring Boot Starter -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>${spring-boot.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- Spring Cloud -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- Spring Cloud Alibaba -->
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-alibaba-dependencies</artifactId>
<version>${spring-cloud-alibaba.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- MyBatis Plus -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-spring-boot3-starter</artifactId>
<version>${mybatis-plus.version}</version>
</dependency>
<!-- MySQL驱动8.0.31 起 Maven 坐标迁移为 com.mysql:mysql-connector-j -->
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<version>${mysql-connector.version}</version>
</dependency>
<!-- Druid -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>${druid.version}</version>
</dependency>
<!-- Lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
<optional>true</optional>
</dependency>
<!-- MapStruct -->
<dependency>
<groupId>org.mapstruct</groupId>
<artifactId>mapstruct</artifactId>
<version>${mapstruct.version}</version>
</dependency>
<!-- JWT -->
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-api</artifactId>
<version>${jjwt.version}</version>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-impl</artifactId>
<version>${jjwt.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-jackson</artifactId>
<version>${jjwt.version}</version>
<scope>runtime</scope>
</dependency>
<!-- OpenAPI 3 / Swagger UI -->
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
<version>${springdoc.version}</version>
</dependency>
<!-- Knife4j Enhanced Documentation -->
<dependency>
<groupId>com.github.xiaoymin</groupId>
<artifactId>knife4j-openapi3-spring-boot-starter</artifactId>
<version>${knife4j.version}</version>
</dependency>
<!-- Micrometer Prometheus Metrics -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>${micrometer.version}</version>
</dependency>
<!-- Distributed Tracing with Sleuth + Zipkin -->
<!-- Removed: spring-cloud-starter-sleuth and spring-cloud-sleuth-zipkin are not available in Spring Cloud 2023.0.0+ -->
<!-- Use micrometer-tracing-bridge instead for distributed tracing -->
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${spring-boot.version}</version>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.11.0</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<annotationProcessorPaths>
<path>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
</path>
<path>
<groupId>org.mapstruct</groupId>
<artifactId>mapstruct-processor</artifactId>
<version>${mapstruct.version}</version>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
</plugins>
</build>
</project>

pom.xml.bak Normal file

@ -0,0 +1,223 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.erp</groupId>
<artifactId>erp-java-backend</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>ERP Java Backend</name>
<description>ERP系统Java微服务后端</description>
<properties>
<java.version>17</java.version>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<!-- Spring Boot -->
<spring-boot.version>3.2.0</spring-boot.version>
<spring-cloud.version>2023.0.0</spring-cloud.version>
<spring-cloud-alibaba.version>2023.0.1.0</spring-cloud-alibaba.version>
<!-- 其他依赖版本 -->
<mybatis-plus.version>3.5.5</mybatis-plus.version>
<mysql-connector.version>8.0.33</mysql-connector.version>
<druid.version>1.2.20</druid.version>
<lombok.version>1.18.30</lombok.version>
<mapstruct.version>1.5.5.Final</mapstruct.version>
<jjwt.version>0.12.3</jjwt.version>
<!-- OpenAPI & Monitoring -->
<springdoc.version>2.3.0</springdoc.version>
<knife4j.version>4.3.0</knife4j.version>
<micrometer.version>1.12.5</micrometer.version>
</properties>
<!-- 模块定义 -->
<modules>
<!-- 公共模块 -->
<module>common/common-core</module>
<module>common/common-web</module>
<module>common/common-config</module>
<!-- 业务服务 -->
<module>services/user-service</module>
<module>services/product-service</module>
<module>services/order-service</module>
<module>services/inventory-service</module>
<module>services/file-service</module>
<module>services/tenant-service</module>
<module>services/permission-service</module>
<module>services/report-service</module>
<module>services/dashboard-service</module>
<module>services/system-tool-service</module>
<module>services/reconciliation-service</module>
<module>services/purchase-service</module>
<module>services/customer-service</module>
<module>services/approval-flow-service</module>
<module>services/category-service</module>
<module>services/ai-service</module>
<module>services/sku-match-service</module>
<module>services/supplier-service</module>
<module>services/invoice-service</module>
<module>services/waybill-service</module>
<module>services/logistics-service</module>
<module>services/data-import-export-service</module>
</modules>
<!-- 依赖管理 -->
<dependencyManagement>
<dependencies>
<!-- Spring Boot Starter -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>${spring-boot.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- Spring Cloud -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- Spring Cloud Alibaba -->
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-alibaba-dependencies</artifactId>
<version>${spring-cloud-alibaba.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- MyBatis Plus -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>${mybatis-plus.version}</version>
</dependency>
<!-- MySQL -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>${mysql-connector.version}</version>
</dependency>
<!-- Druid -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>${druid.version}</version>
</dependency>
<!-- Lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
<optional>true</optional>
</dependency>
<!-- MapStruct -->
<dependency>
<groupId>org.mapstruct</groupId>
<artifactId>mapstruct</artifactId>
<version>${mapstruct.version}</version>
</dependency>
<!-- JWT -->
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-api</artifactId>
<version>${jjwt.version}</version>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-impl</artifactId>
<version>${jjwt.version}</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-jackson</artifactId>
<version>${jjwt.version}</version>
<scope>runtime</scope>
</dependency>
<!-- OpenAPI 3 / Swagger UI -->
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
<version>${springdoc.version}</version>
</dependency>
<!-- Knife4j Enhanced Documentation -->
<dependency>
<groupId>com.github.xiaoymin</groupId>
<artifactId>knife4j-openapi3-spring-boot-starter</artifactId>
<version>${knife4j.version}</version>
</dependency>
<!-- Micrometer Prometheus Metrics -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>${micrometer.version}</version>
</dependency>
<!-- Distributed Tracing with Sleuth + Zipkin -->
<!-- Removed: spring-cloud-starter-sleuth and spring-cloud-sleuth-zipkin are not available in Spring Cloud 2023.0.0+ -->
<!-- Use micrometer-tracing-bridge instead for distributed tracing -->
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${spring-boot.version}</version>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.11.0</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<annotationProcessorPaths>
<path>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
</path>
<path>
<groupId>org.mapstruct</groupId>
<artifactId>mapstruct-processor</artifactId>
<version>${mapstruct.version}</version>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
</plugins>
</build>
</project>

project-structure.md Normal file

@ -0,0 +1,42 @@
# 项目结构
```
erp-java-backend/
├── README.md
├── project-structure.md
├── docker-compose.yml # 开发环境Docker配置
├── scripts/ # 部署脚本
├── docs/ # 文档
│ ├── api-docs/ # API文档
│ ├── database/ # 数据库设计
│ └── deployment/ # 部署文档
├── infrastructure/ # 基础设施
│ ├── nacos/ # Nacos配置
│ ├── mysql/ # 数据库脚本
│ ├── redis/ # Redis配置
│ └── rocketmq/ # RocketMQ配置
├── common/ # 公共模块
│ ├── common-core/ # 核心工具类
│ ├── common-web/ # Web相关
│ ├── common-mybatis/ # MyBatis配置
│ └── common-redis/ # Redis配置
├── services/ # 业务服务
│ ├── api-gateway/ # API网关
│ ├── user-service/ # 用户服务
│ ├── product-service/ # 商品服务
│ ├── order-service/ # 订单服务
│ ├── inventory-service/ # 库存服务
│ ├── finance-service/ # 财务服务
│ ├── admin-service/ # 总控服务
│ ├── file-service/ # 文件服务
│ └── notification-service/ # 通知服务
└── pom.xml # 父级Maven配置
```
## 开发环境要求
- JDK 17+
- Maven 3.8+
- Docker 20.10+
- MySQL 8.0+
- Redis 7.0+
- Nacos 2.2+

rocketmq/README.md Normal file

@ -0,0 +1,272 @@
# RocketMQ 消息队列服务配置
ERP系统的RocketMQ消息队列集群配置支持高可用、消息持久化和分布式事务。
## 架构概览
```
┌─────────────────────────────────────────────────────────────────┐
│ RocketMQ 集群 │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ NameServer │ │ NameServer │ │
│ │ 9876 │◄────────────►│ 9877 │ │
│ └─────────────┘ └─────────────┘ │
│ ▲ │
│ │ │
│ ┌──────┴──────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Master-1 │◄──────►│ Slave-1 │ │ │
│ │ │ 10911 │ 同步 │ 10931 │ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Master-2 │◄──────►│ Slave-2 │ │ │
│ │ │ 10921 │ 同步 │ 10941 │ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ └──────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## 目录结构
```
rocketmq/
├── docker-compose.yml # Docker Compose配置
├── start.sh # 启动脚本
├── stop.sh # 停止脚本
├── status.sh # 状态检查脚本
├── README.md # 本文档
├── config/
│ ├── namesrv-1.conf # NameServer-1配置
│ ├── namesrv-2.conf # NameServer-2配置
│ ├── broker-master-1.conf # Broker Master-1配置
│ ├── broker-master-2.conf # Broker Master-2配置
│ ├── broker-slave-1.conf # Broker Slave-1配置
│ ├── broker-slave-2.conf # Broker Slave-2配置
│ ├── setup-topics.sh # Topic创建脚本
│ ├── rocketmq-base.yml # 基础配置(各服务通用)
│ ├── order-service.yml # 订单服务配置
│ ├── inventory-service.yml # 库存服务配置
│ └── finance-service.yml # 财务服务配置
├── producer/
│ ├── BaseRocketMQProducer.java # 生产者基类
│ ├── OrderMessageProducer.java # 订单消息生产者
│ ├── InventoryMessageProducer.java # 库存消息生产者
│ └── FinanceMessageProducer.java # 财务消息生产者
├── consumer/
│ ├── BaseRocketMQConsumer.java # 消费者基类
│ ├── OrderMessageConsumer.java # 订单消息消费者
│ ├── InventoryMessageConsumer.java # 库存消息消费者
│ └── FinanceMessageConsumer.java # 财务消息消费者
└── monitoring/
├── docker-compose-monitoring.yml # 监控配置
├── prometheus.yml # Prometheus配置
├── rocketmq_rules.yml # 告警规则
└── grafana-dashboard.json # Grafana看板
```
## 快速开始
### 1. 启动集群
```bash
cd /root/.openclaw/workspace/erp-java-backend/rocketmq
chmod +x start.sh stop.sh status.sh config/setup-topics.sh
./start.sh
```
### 2. 检查状态
```bash
./status.sh
```
### 3. 访问Dashboard
- RocketMQ Dashboard: http://localhost:8080
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin123)
### 4. 停止集群
```bash
./stop.sh
```
## 消息主题
| 主题名称 | 用途 | 队列数 |
|---------|------|--------|
| order-topic | 订单消息 | 8 |
| inventory-topic | 库存消息 | 8 |
| finance-topic | 财务消息 | 8 |
| notification-topic | 通知消息 | 4 |
| payment-topic | 支付消息 | 8 |
| warehouse-topic | 仓库消息 | 4 |
| customer-topic | 客户消息 | 4 |
| report-topic | 报表消息 | 2 |
## 微服务配置
### 环境变量
```bash
# 值中包含分号,必须加引号,否则分号会被 shell 当作命令分隔符
export ROCKETMQ_NAMESRV_ADDR="localhost:9876;localhost:9877"
```
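客户端会按分号将该地址串拆分为多个 NameServer 地址。下面用纯 JDK 做一个最小示意(`NamesrvAddrParser` 为本文示例假设的类名,并非项目代码):

```java
import java.util.Arrays;
import java.util.List;

public class NamesrvAddrParser {

    /** 按分号拆分 "host1:9876;host2:9877",得到各个 NameServer 地址 */
    public static List<String> parse(String namesrvAddr) {
        return Arrays.stream(namesrvAddr.split(";"))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .toList();
    }

    public static void main(String[] args) {
        // 未设置环境变量时回落到上文文档中的两个本地 NameServer
        String addr = System.getenv().getOrDefault(
                "ROCKETMQ_NAMESRV_ADDR", "localhost:9876;localhost:9877");
        System.out.println(parse(addr));
    }
}
```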
### 订单服务 (ERP Order Service)
```yaml
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR}
producer:
group: erp-order-producer-group
consumer:
group: erp-order-consumer-group
```
### 库存服务 (ERP Inventory Service)
```yaml
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR}
producer:
group: erp-inventory-producer-group
consumer:
group: erp-inventory-consumer-group
```
### 财务服务 (ERP Finance Service)
```yaml
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR}
producer:
group: erp-finance-producer-group
consumer:
group: erp-finance-consumer-group
```
## 消息格式
### 订单消息 (OrderMessage)
```json
{
"orderId": "ORD202604040001",
"customerId": "CUST001",
"totalAmount": 999.00,
"status": "PAID",
"paymentMethod": "WECHAT",
"timestamp": 1712208000000
}
```
### 库存消息 (InventoryMessage)
```json
{
"skuId": "SKU001",
"productName": "商品名称",
"quantity": 10,
"currentStock": 100,
"threshold": 20,
"warehouseId": "WH001",
"orderId": "ORD202604040001",
"operationType": "DEDUCT",
"timestamp": 1712208000000
}
```
### 财务消息 (FinanceMessage)
```json
{
"invoiceId": "INV202604040001",
"orderId": "ORD202604040001",
"amount": 999.00,
"customerId": "CUST001",
"customerName": "客户名称",
"taxNumber": "91310000XXXXXXXX",
"invoiceType": "VAT_SPECIAL",
"status": "CREATED",
"timestamp": 1712208000000
}
```
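消息体在真实服务中通常由 Jackson 序列化 DTO 得到;下面的纯 JDK 草图仅演示上述 OrderMessage 字段如何映射为线上 JSON`OrderMessagePayload` 为示例假设的类名,非项目代码):

```java
public class OrderMessagePayload {

    /** 按上文文档中的字段顺序拼出 OrderMessage JSON真实服务应改用 Jackson 序列化 DTO */
    public static String toJson(String orderId, String customerId, String totalAmount,
                                String status, String paymentMethod, long timestamp) {
        return String.format(
                "{\"orderId\":\"%s\",\"customerId\":\"%s\",\"totalAmount\":%s,"
                        + "\"status\":\"%s\",\"paymentMethod\":\"%s\",\"timestamp\":%d}",
                orderId, customerId, totalAmount, status, paymentMethod, timestamp);
    }

    public static void main(String[] args) {
        System.out.println(toJson("ORD202604040001", "CUST001", "999.00",
                "PAID", "WECHAT", 1712208000000L));
    }
}
```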
## 高可用配置
### 主从同步
- Master节点处理读写请求
- Slave节点异步同步Master数据
- 主节点故障时Slave 仍可继续提供消费读取;自动主从切换需启用 DLedger 模式
### 消息持久化
- 消息存储在本地文件系统
- 异步刷盘策略ASYNC_FLUSH
- 消息保留时间72小时
### 负载均衡
- 多个Consumer组成ConsumerGroup
- 消息在Group内负载均衡
- 支持顺序消息和事务消息
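CLUSTERING 模式下队列按消费者实例平均分配RocketMQ 默认的 AllocateMessageQueueAveragely 策略可简化示意如下(假设队列与消费者列表已排序):

```java
import java.util.ArrayList;
import java.util.List;

public class AverageQueueAllocator {

    /**
     * RocketMQ 默认 AllocateMessageQueueAveragely 策略的简化版:
     * 前 (queueCount % consumerCount) 个消费者各多分到一个队列。
     */
    public static List<Integer> allocate(int queueCount, int consumerCount, int consumerIndex) {
        List<Integer> result = new ArrayList<>();
        int mod = queueCount % consumerCount;
        int averageSize = (mod > 0 && consumerIndex < mod)
                ? queueCount / consumerCount + 1
                : queueCount / consumerCount;
        int startIndex = (mod > 0 && consumerIndex < mod)
                ? consumerIndex * averageSize
                : consumerIndex * averageSize + mod;
        int range = Math.min(averageSize, queueCount - startIndex);
        for (int i = 0; i < range; i++) {
            result.add(startIndex + i);
        }
        return result;
    }
}
```

例如 8 队列的 order-topic 配 3 个消费者实例时,分配结果为 3/3/2。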
## 监控告警
### 监控指标
- TPS每秒发送/消费消息数)
- 消费堆积量
- 发送失败率
- 内存/磁盘使用率
- NameServer/Broker存活状态
### 告警规则
| 告警名称 | 条件 | 严重级别 |
|---------|------|---------|
| NameServerDown | NameServer不可用 | Critical |
| BrokerDown | Broker不可用 | Critical |
| ConsumerLag | 消费堆积 > 1000 | Warning |
| ProducerSendFailed | 发送失败率 > 10/s | Warning |
| MemoryUsageHigh | 内存使用率 > 85% | Warning |
| DiskUsageHigh | 磁盘使用率 > 80% | Warning |
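上表 ConsumerLag 规则的本质是Broker 最大位点减去消费位点得到堆积量,超过阈值即告警。最小判定示意(阈值取自上表,`ConsumerLagAlert` 为示例假设的类名):

```java
public class ConsumerLagAlert {

    // 告警阈值,对应上表 ConsumerLag 规则中的 1000
    static final long LAG_WARN_THRESHOLD = 1000;

    /** 堆积量 = Broker 最大位点 - 已提交的消费位点 */
    public static long lag(long brokerOffset, long consumerOffset) {
        return brokerOffset - consumerOffset;
    }

    /** 堆积量超过 Warning 阈值时返回 true */
    public static boolean shouldWarn(long brokerOffset, long consumerOffset) {
        return lag(brokerOffset, consumerOffset) > LAG_WARN_THRESHOLD;
    }
}
```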
## 常见问题
### 1. 启动失败
检查Docker是否运行
```bash
docker info
```
查看容器日志:
```bash
docker-compose logs -f
```
### 2. 消息发送失败
检查NameServer地址配置是否正确
```bash
echo $ROCKETMQ_NAMESRV_ADDR
```
### 3. 消费堆积
查看消费者状态:
```bash
docker exec rocketmq-broker-master-1 sh mqadmin consumerStatus -n namesrv-1:9876 -g erp-order-consumer-group
```
## 技术支持
- RocketMQ官方文档: https://rocketmq.apache.org/docs/
- Dashboard使用指南: https://github.com/apache/rocketmq-dashboard

rocketmq/config/broker-master-1.conf Normal file

@ -0,0 +1,76 @@
# RocketMQ Broker Master-1 Configuration
# 集群名称
clusterName = rocketmq-cluster
brokerName = broker-master-1
# Broker角色ASYNC_MASTER异步主节点
brokerRole = ASYNC_MASTER
# 刷盘策略ASYNC_FLUSH异步刷盘
flushDiskType = ASYNC_FLUSH
# NameServer地址多个用分号分隔
namesrvAddr = namesrv-1:9876;namesrv-2:9876
# Broker监听端口
listenPort = 10911
# HA 监听端口(不能使用 10909该端口是 VIP 通道端口 listenPort-2会冲突
haListenPort = 10919
#Broker IP需根据实际宿主机IP修改
brokerIP1 = 192.168.1.100
brokerIP2 = 192.168.1.100
# 是否允许Broker自动创建Topic
autoCreateTopicEnable = true
# 是否允许Broker自动创建订阅组
autoCreateSubscriptionGroup = true
# 消息存储路径
storePathRootDir = /home/rocketmq/store
# CommitLog存储路径
storePathCommitLog = /home/rocketmq/store/commitlog
# 消费队列存储路径
storePathConsumerQueue = /home/rocketmq/store/consumequeue
# 索引存储路径
storePathIndex = /home/rocketmq/store/index
# 检查点文件路径
storeCheckpoint = /home/rocketmq/store/checkpoint
# abort文件路径
abortFile = /home/rocketmq/store/abort
# 消息最大大小(字节)
maxMessageSize = 6291456
# 删除文件时间点默认凌晨4点
deleteWhen = 04
# 文件保留时间(小时)
fileReservedTime = 72
# 队列数
# 默认队列数(建议根据业务调整)
defaultTopicQueueNums = 8
# 发送消息超时时间(毫秒)
sendMessageTimeout = 3000
# 拉取消息超时时间(毫秒)
pullMessageTimeout = 30000
# 主题配置
# 订单主题
orderTopicNames = order-topic
# 库存主题
inventoryTopicNames = inventory-topic
# 财务主题
financeTopicNames = finance-topic
# 通知主题
notificationTopicNames = notification-topic

rocketmq/config/broker-master-2.conf Normal file

@ -0,0 +1,28 @@
# RocketMQ Broker Master-2 Configuration
clusterName = rocketmq-cluster
brokerName = broker-master-2
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
namesrvAddr = namesrv-1:9876;namesrv-2:9876
listenPort = 10911
haListenPort = 10929
brokerIP1 = 192.168.1.100
brokerIP2 = 192.168.1.100
autoCreateTopicEnable = true
autoCreateSubscriptionGroup = true
storePathRootDir = /home/rocketmq/store
storePathCommitLog = /home/rocketmq/store/commitlog
storePathConsumerQueue = /home/rocketmq/store/consumequeue
storePathIndex = /home/rocketmq/store/index
storeCheckpoint = /home/rocketmq/store/checkpoint
abortFile = /home/rocketmq/store/abort
maxMessageSize = 6291456
deleteWhen = 04
fileReservedTime = 72
defaultTopicQueueNums = 8
sendMessageTimeout = 3000
pullMessageTimeout = 30000
orderTopicNames = order-topic
inventoryTopicNames = inventory-topic
financeTopicNames = finance-topic
notificationTopicNames = notification-topic

rocketmq/config/broker-slave-1.conf Normal file

@ -0,0 +1,25 @@
# RocketMQ Broker Slave-1 Configuration
clusterName = rocketmq-cluster
brokerName = broker-slave-1
# 从节点配置
brokerRole = SLAVE
flushDiskType = ASYNC_FLUSH
namesrvAddr = namesrv-1:9876;namesrv-2:9876
listenPort = 10911
haListenPort = 10939
brokerIP1 = 192.168.1.100
brokerIP2 = 192.168.1.100
autoCreateTopicEnable = true
autoCreateSubscriptionGroup = true
storePathRootDir = /home/rocketmq/store
storePathCommitLog = /home/rocketmq/store/commitlog
storePathConsumerQueue = /home/rocketmq/store/consumequeue
storePathIndex = /home/rocketmq/store/index
storeCheckpoint = /home/rocketmq/store/checkpoint
abortFile = /home/rocketmq/store/abort
maxMessageSize = 6291456
deleteWhen = 04
fileReservedTime = 72
defaultTopicQueueNums = 8
sendMessageTimeout = 3000
pullMessageTimeout = 30000

rocketmq/config/broker-slave-2.conf Normal file

@ -0,0 +1,24 @@
# RocketMQ Broker Slave-2 Configuration
clusterName = rocketmq-cluster
brokerName = broker-slave-2
brokerRole = SLAVE
flushDiskType = ASYNC_FLUSH
namesrvAddr = namesrv-1:9876;namesrv-2:9876
listenPort = 10911
haListenPort = 10949
brokerIP1 = 192.168.1.100
brokerIP2 = 192.168.1.100
autoCreateTopicEnable = true
autoCreateSubscriptionGroup = true
storePathRootDir = /home/rocketmq/store
storePathCommitLog = /home/rocketmq/store/commitlog
storePathConsumerQueue = /home/rocketmq/store/consumequeue
storePathIndex = /home/rocketmq/store/index
storeCheckpoint = /home/rocketmq/store/checkpoint
abortFile = /home/rocketmq/store/abort
maxMessageSize = 6291456
deleteWhen = 04
fileReservedTime = 72
defaultTopicQueueNums = 8
sendMessageTimeout = 3000
pullMessageTimeout = 30000

rocketmq/config/finance-service.yml Normal file

@ -0,0 +1,77 @@
# Spring Cloud Stream RocketMQ Binder 配置
# 财务服务配置
spring:
application:
name: erp-finance-service
cloud:
stream:
rocketmq:
binder:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
producer:
enabled: true
enable-msg-trace: true
access-key: rocketmq
secret-key: rocketmq123
bindings:
# 财务输出通道
financeOutput:
destination: finance-topic
content-type: application/json
group: erp-finance-producer-group
binder: rocketmq
# 订单输入通道(接收订单支付消息)
orderInput:
destination: order-topic
content-type: application/json
group: erp-finance-consumer-group
consumer:
concurrency: 12
maxAttempts: 3
binder: rocketmq
# 支付输入通道(接收支付结果)
paymentInput:
destination: payment-topic
content-type: application/json
group: erp-finance-consumer-group
consumer:
concurrency: 12
maxAttempts: 3
binder: rocketmq
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
producer:
group: erp-finance-producer-group
retry-times-when-send-failed: 3
timeout: 3000
consumer:
group: erp-finance-consumer-group
consume-concurrency: 12
max-retry-times: 3
erp:
rocketmq:
topics:
order: order-topic
finance: finance-topic
payment: payment-topic
consumer:
groups:
finance-group: erp-finance-consumer-group
payment-group: erp-payment-consumer-group
server:
port: 8083
logging:
level:
org.apache.rocketmq: INFO
com.erp.mq: DEBUG

rocketmq/config/inventory-service.yml Normal file

@ -0,0 +1,74 @@
# Spring Cloud Stream RocketMQ Binder 配置
# 库存服务配置
spring:
application:
name: erp-inventory-service
cloud:
stream:
rocketmq:
binder:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
producer:
enabled: true
enable-msg-trace: true
access-key: rocketmq
secret-key: rocketmq123
bindings:
# 库存输出通道
inventoryOutput:
destination: inventory-topic
content-type: application/json
group: erp-inventory-producer-group
binder: rocketmq
# 订单输入通道(接收订单消息,触发库存扣减)
orderInput:
destination: order-topic
content-type: application/json
group: erp-inventory-consumer-group
consumer:
concurrency: 16
maxAttempts: 3
binder: rocketmq
# 通知输出通道(发送库存预警等通知)
notificationOutput:
destination: notification-topic
content-type: application/json
group: erp-inventory-producer-group
binder: rocketmq
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
producer:
group: erp-inventory-producer-group
retry-times-when-send-failed: 3
timeout: 3000
consumer:
group: erp-inventory-consumer-group
consume-concurrency: 16
max-retry-times: 3
erp:
rocketmq:
topics:
order: order-topic
inventory: inventory-topic
notification: notification-topic
consumer:
groups:
inventory-group: erp-inventory-consumer-group
notification-group: erp-notification-consumer-group
server:
port: 8082
logging:
level:
org.apache.rocketmq: INFO
com.erp.mq: DEBUG

rocketmq/config/namesrv-1.conf Normal file

@ -0,0 +1,9 @@
# RocketMQ NameServer Configurationnamesrv-1
# NameServer 监听端口clusterName/namesrvAddr 是 Broker 侧配置NameServer 不识别;
# .conf 文件也不支持 "1000 * 5" 这类表达式,只能写字面值)
listenPort = 9876

rocketmq/config/namesrv-2.conf Normal file

@ -0,0 +1,9 @@
# RocketMQ NameServer Configurationnamesrv-2
# NameServer 监听端口宿主机的 9877 由 docker-compose 映射到本容器端口
listenPort = 9876

rocketmq/config/order-service.yml Normal file

@ -0,0 +1,86 @@
# Spring Cloud Stream RocketMQ Binder 配置
# 订单服务配置
spring:
application:
name: erp-order-service
cloud:
stream:
# RocketMQ Binder配置
rocketmq:
binder:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
# 生产者配置
producer:
enabled: true
enable-msg-trace: true
access-key: rocketmq
secret-key: rocketmq123
# 绑定器配置
bindings:
# 订单输出通道
orderOutput:
destination: order-topic
content-type: application/json
group: erp-order-producer-group
binder: rocketmq
# 库存输入通道(接收库存消息)
inventoryInput:
destination: inventory-topic
content-type: application/json
group: erp-order-consumer-group
consumer:
concurrency: 16
maxAttempts: 3
binder: rocketmq
# 支付输入通道(接收支付结果)
paymentInput:
destination: payment-topic
content-type: application/json
group: erp-order-consumer-group
consumer:
concurrency: 8
maxAttempts: 3
binder: rocketmq
# RocketMQ专用配置
rocketmq:
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
producer:
group: erp-order-producer-group
retry-times-when-send-failed: 3
timeout: 3000
max-message-size: 6291456
consumer:
group: erp-order-consumer-group
consume-concurrency: 16
max-retry-times: 3
# ERP主题配置
erp:
rocketmq:
topics:
order: order-topic
inventory: inventory-topic
payment: payment-topic
consumer:
groups:
order-group: erp-order-consumer-group
inventory-group: erp-inventory-consumer-group
payment-group: erp-payment-consumer-group
# 服务端口
server:
port: 8081
# 日志配置
logging:
level:
org.apache.rocketmq: INFO
com.erp.mq: DEBUG

rocketmq/config/rocketmq-base.yml Normal file

@ -0,0 +1,106 @@
# RocketMQ Spring Cloud Stream Configuration
# 所有微服务共享的基础配置
rocketmq:
# NameServer地址
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
# 客户端配置
client:
# 客户端类型JAVA | CPP | .NET | Python | Go | NodeJS
type: JAVA
# 客户端日志级别
log:
level: INFO
# 线程池配置
threadpool:
# 生产者线程池大小
producer:
core-size: 10
max-size: 50
queue-size: 10000
# 消费者线程池大小
consumer:
core-size: 20
max-size: 100
queue-size: 10000
# 生产者配置
producer:
# 发送失败重试次数
retry-times-when-send-failed: 3
# 异步发送失败重试次数
retry-times-when-send-async-failed: 2
# 最大消息大小
max-message-size: 6291456
# 压缩阈值
compress-msg-body-over-howmuch: 4096
# 发送超时时间(毫秒)
timeout: 3000
# 消费者配置
consumer:
# 消费模式CLUSTERING | BROADCASTING
mode: CLUSTERING
# 消息拉取策略
pull-batch-size: 32
# 消费并发度
consume-concurrency: 16
# 最大重试次数
max-retry-times: 16
# 重试间隔(毫秒)
retry-interval: 1000
# 高可用配置
ha:
# 主从同步时间间隔
ha-sync-timestamp-interval: 1000
# 主从同步检测间隔
ha-heartbeat-interval: 1000
# Spring Cloud Stream RocketMQ Binder
spring:
cloud:
stream:
rocketmq:
# Binder配置
binder:
# NameServer地址
namesrv-addr: ${ROCKETMQ_NAMESRV_ADDR:localhost:9876}
# 生产者配置
producer:
# 开启消息追踪
enable-msg-trace: true
# 接入点名称
access-key: rocketmq
secret-key: rocketmq123
# 默认绑定器配置
default-binder: rocketmq
# ERP系统消息主题配置
erp:
rocketmq:
topics:
order: order-topic
inventory: inventory-topic
finance: finance-topic
notification: notification-topic
payment: payment-topic
warehouse: warehouse-topic
customer: customer-topic
report: report-topic
consumer:
# 消费者组配置
groups:
order-group: erp-order-consumer-group
inventory-group: erp-inventory-consumer-group
finance-group: erp-finance-consumer-group
notification-group: erp-notification-consumer-group
payment-group: erp-payment-consumer-group
warehouse-group: erp-warehouse-consumer-group
customer-group: erp-customer-consumer-group
report-group: erp-report-consumer-group

rocketmq/config/setup-topics.sh Executable file

@ -0,0 +1,39 @@
#!/bin/bash
# RocketMQ Topic Setup Script
# 用法: bash setup-topics.sh
# 主题列表
TOPICS=(
"order-topic:8" # 订单主题8个队列
"inventory-topic:8" # 库存主题
"finance-topic:8" # 财务主题
"notification-topic:4" # 通知主题
"payment-topic:8" # 支付主题
"warehouse-topic:4" # 仓库主题
"customer-topic:4" # 客户主题
"report-topic:2" # 报表主题
)
# NameServer 地址mqadmin 在 Broker 容器内执行,容器内的 localhost 无法解析到 NameServer
NAMESRV_ADDR=${NAMESRV_ADDR:-"namesrv-1:9876"}
export NAMESRV_ADDR
# 循环创建主题
for topic_info in "${TOPICS[@]}"; do
topic=${topic_info%%:*}
queues=${topic_info##*:}
echo "Creating topic: $topic with $queues queues..."
# 在 Master Broker 容器内使用 mqadmin 创建主题
docker exec rocketmq-broker-master-1 sh mqadmin updateTopic \
-n "$NAMESRV_ADDR" \
-t "$topic" \
-c rocketmq-cluster \
-r "$queues" \
-w "$queues" \
&& echo "Topic $topic created successfully!" \
|| echo "Topic $topic creation FAILED" >&2
done
echo "All topics created!"

rocketmq/consumer/BaseRocketMQConsumer.java Normal file

@ -0,0 +1,138 @@
package com.erp.mq.consumer;
import lombok.extern.slf4j.Slf4j;
import org.apache.rocketmq.client.consumer.*;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.MessageExt;
import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
import org.springframework.beans.factory.annotation.Value;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
/**
* RocketMQ 消息消费者基类
* 提供通用的消息消费能力
*/
@Slf4j
public abstract class BaseRocketMQConsumer {
@Value("${rocketmq.namesrv-addr}")
protected String namesrvAddr;
protected DefaultMQPushConsumer consumer;
/**
* 获取Topic名称
*/
protected abstract String getTopic();
/**
* 获取消费者组
*/
protected abstract String getConsumerGroup();
/**
* 获取Tags*表示所有标签
*/
protected abstract String getTags();
/**
* 初始化消费者
*/
protected void init() throws MQClientException {
consumer = new DefaultMQPushConsumer(getConsumerGroup());
consumer.setNamesrvAddr(namesrvAddr);
// 设置消费模式集群消费
consumer.setMessageModel(MessageModel.CLUSTERING);
// 设置消费起始点
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
// 设置并发消费线程数
consumer.setConsumeThreadMin(10);
consumer.setConsumeThreadMax(30);
// 设置拉取消息间隔
consumer.setPullInterval(0);
// 设置拉取消息数量
consumer.setPullBatchSize(32);
// 设置最大重试次数
consumer.setMaxReconsumeTimes(16);
// 订阅主题
consumer.subscribe(getTopic(), getTags());
// 注册消息监听器
consumer.registerMessageListener(this::consumeMessage);
// 启动消费者
consumer.start();
log.info("RocketMQ消费者启动成功 - Topic: {}, Group: {}", getTopic(), getConsumerGroup());
}
/**
* 消费消息处理
*/
protected ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> messageExtList, ConsumeConcurrentlyContext context) {
AtomicLong successCount = new AtomicLong(0);
AtomicLong failCount = new AtomicLong(0);
for (MessageExt messageExt : messageExtList) {
try {
String body = new String(messageExt.getBody(), StandardCharsets.UTF_8);
String msgId = messageExt.getMsgId();
String tags = messageExt.getTags();
String keys = messageExt.getKeys();
log.info("收到消息 - Topic: {}, MsgId: {}, Tags: {}, Keys: {}",
getTopic(), msgId, tags, keys);
// 调用子类处理逻辑
boolean success = handleMessage(body, messageExt);
if (success) {
successCount.incrementAndGet();
log.debug("消息处理成功 - MsgId: {}", msgId);
} else {
failCount.incrementAndGet();
log.warn("消息处理返回失败 - MsgId: {}", msgId);
return ConsumeConcurrentlyStatus.RECONSUME_LATER;
}
} catch (Exception e) {
failCount.incrementAndGet();
log.error("消息处理异常 - MsgId: {}", messageExt.getMsgId(), e);
return ConsumeConcurrentlyStatus.RECONSUME_LATER;
}
}
log.info("批次消息处理完成 - Topic: {}, Success: {}, Failed: {}",
getTopic(), successCount.get(), failCount.get());
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}
/**
* 处理消息 - 子类实现具体逻辑
* @param messageBody 消息体
* @param messageExt 消息扩展信息
* @return true表示处理成功false表示需要重试
*/
protected abstract boolean handleMessage(String messageBody, MessageExt messageExt);
/**
* 关闭消费者
*/
public void shutdown() {
if (consumer != null) {
consumer.shutdown();
log.info("RocketMQ消费者已关闭 - Topic: {}, Group: {}", getTopic(), getConsumerGroup());
}
}
}
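由于 handleMessage 在返回 RECONSUME_LATER 后会被重复投递,基于该基类的具体消费者应保证幂等。常见做法是按 msgId 去重,以下为纯 JDK 示意(生产环境应将去重集合放入 Redis 或数据库,`MsgIdDeduplicator` 为示例假设的类名,并非项目代码):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MsgIdDeduplicator {

    // 仅内存去重;真实部署应持久化(例如 Redis SETNX 并设置过期时间)
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** 同一 msgId 只有首次投递返回 true重试投递被识别为重复 */
    public boolean firstDelivery(String msgId) {
        return processed.add(msgId);
    }
}
```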

Some files were not shown because too many files have changed in this diff.