Allow creating and monitoring run replication services with different settings #2055
Conversation
Walkthrough

This change introduces new API endpoints and global singleton management for a runs replication service and a TCP buffer monitor in the web application. It adds environment and configuration options for ClickHouse clients, including connection limits and compression. Logging enhancements and new test adjustments are also included.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant AdminUser
    participant API
    participant Auth
    participant GlobalStore
    participant RunsReplicationService
    participant ClickHouse
    participant Redis
    AdminUser->>API: POST /admin.api.v1.runs-replication.create
    API->>Auth: Validate personal access token
    Auth-->>API: User info (isAdmin)
    API->>GlobalStore: Check if service exists
    alt Service does not exist
        API->>API: Parse & validate payload
        API->>RunsReplicationService: Create instance (with ClickHouse & Redis configs)
        API->>GlobalStore: Store service singleton
        API->>RunsReplicationService: Start service
        RunsReplicationService->>ClickHouse: Connect
        RunsReplicationService->>Redis: Connect
        API-->>AdminUser: Success response
    else Service exists
        API-->>AdminUser: 400 error (already running)
    end
```

```mermaid
sequenceDiagram
    participant AdminUser
    participant API
    participant Auth
    participant GlobalStore
    participant Monitor
    AdminUser->>API: POST /admin.api.v1.runs-replication.start-monitor
    API->>Auth: Validate personal access token
    Auth-->>API: User info (isAdmin)
    API->>GlobalStore: Check if monitor exists
    alt Monitor does not exist
        API->>Monitor: Start TCP buffer monitor (interval)
        API->>GlobalStore: Store monitor singleton
        API-->>AdminUser: Success response
    else Monitor exists
        API-->>AdminUser: 400 error (already running)
    end
    AdminUser->>API: POST /admin.api.v1.runs-replication.stop-monitor
    API->>Auth: Validate personal access token
    Auth-->>API: User info (isAdmin)
    API->>GlobalStore: Get monitor singleton
    alt Monitor exists
        API->>Monitor: Stop monitor (clear interval)
        API->>GlobalStore: Unregister monitor
        API-->>AdminUser: Success response
    else Monitor does not exist
        API-->>AdminUser: 400 error (not running)
    end
```
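The guard-and-register flow in the create diagram above can be sketched in TypeScript. This is a minimal illustration under stated assumptions — `ReplicationService`, `globalStore`, and `createHandler` are hypothetical names standing in for the PR's actual API:

```typescript
// Hypothetical sketch of the create-endpoint guard shown in the diagram above.
// Names here are illustrative, not the PR's actual identifiers.
type ApiResponse = { status: number; body: string };

class ReplicationService {
  started = false;
  async start(): Promise<void> {
    this.started = true; // stands in for connecting to ClickHouse/Redis
  }
}

// Module-level singleton slot standing in for the PR's global store.
const globalStore: { service?: ReplicationService } = {};

async function createHandler(): Promise<ApiResponse> {
  if (globalStore.service) {
    // "else Service exists" branch: reject with 400.
    return { status: 400, body: "runs replication service already running" };
  }
  const service = new ReplicationService();
  globalStore.service = service; // diagram order: register, then start
  await service.start();
  return { status: 200, body: "runs replication service started" };
}
```

(A review comment later in this thread suggests swapping the last two steps, so a failed `start()` never leaves a stale singleton registered.)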
Actionable comments posted: 4
🧹 Nitpick comments (5)
apps/webapp/app/services/runsReplicationInstance.server.ts (1)

Lines 19-19: Replaced logger with console for logging.

The structured logger has been replaced with direct console.log/console.error calls. While this might be simpler, it loses the benefits of structured logging such as consistent formatting, log levels, and easier integration with log management systems.

Consider keeping the structured logger for better observability, especially in production:

```diff
-console.log("🗃️ Runs replication service not enabled");
+logger.info("🗃️ Runs replication service not enabled");
-console.log("🗃️ Runs replication service enabled");
+logger.info("🗃️ Runs replication service enabled");
-console.log("🗃️ Runs replication service started");
+logger.info("🗃️ Runs replication service started");
-console.error("🗃️ Runs replication service failed to start", {
+logger.error("🗃️ Runs replication service failed to start", {
```

Also applies to: 23-24, 71-71, 74-76
apps/webapp/app/services/monitorTcpBuffers.server.ts (2)

Lines 50-53: Consider enhancing error handling with structured logging.

The current error handling logs to console.error, which might not integrate well with the application's logging system. Consider using the logger for error messages as well.

```diff
- console.error("tcp-buffer-monitor error", err);
+ logger.error("tcp-buffer-monitor error", { error: err });
```

Lines 56-57: Consider adding a type annotation for the return value.

Adding a return type would improve code clarity and type safety.

```diff
-export function startTcpBufferMonitor(intervalMs = 5_000) {
+export function startTcpBufferMonitor(intervalMs = 5_000): NodeJS.Timeout {
```

apps/webapp/app/services/runsReplicationGlobal.server.ts (1)
Lines 14-36: Well-structured global state management with typed accessors.

The implementation provides clear getter/setter/unregister functions with proper type safety. However, consider adding null checks or default values when getting global instances, to prevent potential runtime errors.

```ts
export function getRunsReplicationGlobal(): RunsReplicationService | undefined {
  return _global[GLOBAL_RUNS_REPLICATION_KEY];
}

export function getTcpMonitorGlobal(): NodeJS.Timeout | undefined {
  return _global[GLOBAL_TCP_MONITOR_KEY];
}
```

apps/webapp/app/routes/admin.api.v1.runs-replication.create.ts (1)
Lines 80-92: Guard against missing mandatory environment variables.

env.RUN_REPLICATION_CLICKHOUSE_URL (and others like DATABASE_URL) are assumed to be present but not validated. A missing URL will produce a confusing runtime error deep inside the ClickHouse client.

Consider asserting upfront:

```ts
if (!env.RUN_REPLICATION_CLICKHOUSE_URL) {
  throw new Error("RUN_REPLICATION_CLICKHOUSE_URL is not set");
}
```

You can place these assertions in env.server.ts for centralised validation.
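As a sketch of what such centralised validation could look like — `requireEnv` is an illustrative helper name, not code from the PR:

```typescript
// Illustrative helper for centralised env validation; not from the PR itself.
// Accepts an explicit source map so it is testable without touching process.env.
function requireEnv(
  name: string,
  source: Record<string, string | undefined> = process.env
): string {
  const value = source[name];
  if (value === undefined || value === "") {
    throw new Error(`${name} is not set`);
  }
  return value;
}

// Hypothetical usage in env.server.ts:
// const clickhouseUrl = requireEnv("RUN_REPLICATION_CLICKHOUSE_URL");
```

Asserting once at startup turns a confusing deep failure into an immediate, named configuration error.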
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)

- apps/webapp/app/env.server.ts (1 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.create.ts (1 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.start-monitor.ts (1 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.start.ts (2 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.stop-monitor.ts (1 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.stop.ts (2 hunks)
- apps/webapp/app/routes/admin.api.v1.runs-replication.teardown.ts (2 hunks)
- apps/webapp/app/services/monitorTcpBuffers.server.ts (1 hunks)
- apps/webapp/app/services/runsReplicationGlobal.server.ts (1 hunks)
- apps/webapp/app/services/runsReplicationInstance.server.ts (3 hunks)
- apps/webapp/test/runsReplicationService.test.ts (2 hunks)
- internal-packages/clickhouse/src/client/client.ts (4 hunks)
- internal-packages/clickhouse/src/index.ts (4 hunks)
- internal-packages/replication/src/client.ts (0 hunks)
💤 Files with no reviewable changes (1)
- internal-packages/replication/src/client.ts
🧰 Additional context used
🧬 Code Graph Analysis (4)

apps/webapp/app/routes/admin.api.v1.runs-replication.teardown.ts (2)
- apps/webapp/app/services/runsReplicationGlobal.server.ts (2)
  - getRunsReplicationGlobal (14-16)
  - unregisterRunsReplicationGlobal (22-24)
- apps/webapp/app/services/runsReplicationInstance.server.ts (1)
  - runsReplicationInstance (9-12)

apps/webapp/app/routes/admin.api.v1.runs-replication.stop.ts (2)
- apps/webapp/app/services/runsReplicationGlobal.server.ts (1)
  - getRunsReplicationGlobal (14-16)
- apps/webapp/app/services/runsReplicationInstance.server.ts (1)
  - runsReplicationInstance (9-12)

apps/webapp/app/routes/admin.api.v1.runs-replication.start.ts (2)
- apps/webapp/app/services/runsReplicationGlobal.server.ts (1)
  - getRunsReplicationGlobal (14-16)
- apps/webapp/app/services/runsReplicationInstance.server.ts (1)
  - runsReplicationInstance (9-12)

apps/webapp/app/services/runsReplicationGlobal.server.ts (1)
- apps/webapp/app/services/runsReplicationService.server.ts (1)
  - RunsReplicationService (62-652)
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: typecheck / typecheck
- GitHub Check: units / 🧪 Unit Tests
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (25)
apps/webapp/app/env.server.ts (1)

Line 774: Added configuration for ClickHouse client connection pooling.

The new environment variable RUN_REPLICATION_MAX_OPEN_CONNECTIONS configures the maximum number of open connections for the ClickHouse client in the runs replication service, with a sensible default of 10.

apps/webapp/app/routes/admin.api.v1.runs-replication.start.ts (2)
Line 4: Added import for the global singleton management.

Appropriate import for the new global runs replication service accessor.

Lines 30-36: Enhanced service management with global singleton prioritization.

The implementation now properly prioritizes the global runs replication service instance while maintaining backward compatibility with the existing singleton pattern. This approach ensures a smooth transition to the new global management system.

apps/webapp/app/routes/admin.api.v1.runs-replication.stop.ts (2)

Line 4: Added import for the global singleton management.

Appropriate import for the new global runs replication service accessor.

Lines 30-36: Enhanced service management with global singleton prioritization.

The implementation now properly prioritizes the global runs replication service instance while maintaining backward compatibility with the existing singleton pattern. This approach ensures a smooth transition to the new global management system.
apps/webapp/app/routes/admin.api.v1.runs-replication.teardown.ts (2)

Lines 4-7: Added imports for global singleton management.

The code properly imports both the getter and the unregister function for the global runs replication service.

Lines 33-40: Implemented proper teardown with global singleton cleanup.

The implementation correctly:
- Prioritizes the global runs replication service instance
- Properly calls teardown on the service
- Unregisters the global service to prevent memory leaks and stale references
- Falls back to the existing singleton when necessary

This ensures complete cleanup of resources and prevents potential memory issues.
apps/webapp/test/runsReplicationService.test.ts (2)

Lines 21-23: Add compression to improve ClickHouse client performance.

This change adds request compression to the ClickHouse client, which is a good optimization for reducing network traffic, especially when dealing with large datasets in the replication service.

Lines 611-612: Test duration reduced from 4 minutes to 1 minute.

Reducing the long-running test duration from 4 minutes to 1 minute is a good optimization for the test suite's runtime, while still verifying the service's ability to handle processing transactions over an extended period.
apps/webapp/app/services/runsReplicationInstance.server.ts (1)

Lines 33-36: Enhanced ClickHouse client configuration.

Adding compression and connection pool management to the ClickHouse client are excellent optimizations:
- Request compression will reduce network traffic and potentially improve throughput
- Configurable connection pooling via environment variables allows for tuning based on workload and infrastructure
apps/webapp/app/routes/admin.api.v1.runs-replication.stop-monitor.ts (1)

Lines 1-47: New endpoint to stop the TCP buffer monitor.

This endpoint follows good practices:
- Proper authentication and authorization checks
- Appropriate error handling
- Clean resource management with clearInterval
- Clear success/error responses
apps/webapp/app/routes/admin.api.v1.runs-replication.start-monitor.ts (2)

Lines 8-10: Well-defined input validation schema.

Using Zod for schema validation with appropriate min/max constraints is a good practice. The 1-60 second range for the monitoring interval is reasonable.

Lines 34-58: Robust implementation of the start-monitor action.

The implementation shows good practices:
- Try/catch with proper error handling
- Input validation using the schema
- Checking if a monitor is already running before starting a new one
- Clear error messages for different scenarios
internal-packages/clickhouse/src/index.ts (3)

Lines 21-25: Feature enhancement: added configuration options for the ClickHouse client.

The new configuration options for compression and connection limits are well-defined as optional properties in the ClickhouseCommonConfig type.

Lines 67-68: Properly forwarding new configuration options to the client.

The new configuration options are correctly passed to the ClickhouseClient when using a single URL configuration.

Lines 83-84: Consistent implementation across reader/writer clients.

Configuration options are consistently applied to both reader and writer clients when using separate endpoints.

Also applies to: 94-95
apps/webapp/app/services/monitorTcpBuffers.server.ts (3)

Lines 6-10: Well-documented purpose for the TCP buffer monitoring function.

The comments clearly explain the function's purpose and how the logs can be utilized.

Lines 14-18: Efficiently reading system files with Promise.all.

Good use of concurrent file reading to optimize the sampling process.

Lines 26-39: ❓ Verification inconclusive

Verify system compatibility for the Linux-specific implementation.

This code reads from Linux-specific file paths and assumes a 4 KB page size, which might not work in all environments.

🏁 Script executed:

```shell
#!/bin/bash
# Check if the system files exist and are readable
echo "Checking if the required system files exist:"
[ -f "/proc/net/sockstat" ] && echo "✓ /proc/net/sockstat exists" || echo "✗ /proc/net/sockstat does not exist"
[ -f "/proc/sys/net/core/wmem_max" ] && echo "✓ /proc/sys/net/core/wmem_max exists" || echo "✗ /proc/sys/net/core/wmem_max does not exist"
[ -f "/proc/sys/net/ipv4/tcp_mem" ] && echo "✓ /proc/sys/net/ipv4/tcp_mem exists" || echo "✗ /proc/sys/net/ipv4/tcp_mem does not exist"

# Check page size (should be 4KB on most systems)
echo -e "\nVerifying page size:"
getconf PAGE_SIZE
```

Length of output: 664

Verify Linux-specific dependencies and page-size assumptions.

This service reads from Linux-only files and hardcodes a 4 KB page size, which may not exist or hold true on all target systems. Please confirm on your deployment environments that:
- /proc/net/sockstat is present and readable
- /proc/sys/net/core/wmem_max is present and readable
- /proc/sys/net/ipv4/tcp_mem is present and readable
- The system page size is 4096 bytes (or adjust calculations accordingly)

If any of these aren't guaranteed, consider adding feature detection or configurable defaults.
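As an illustration of the feature-detection idea, here is a dependency-free sketch of parsing the TCP `mem` field (reported in pages) out of `/proc/net/sockstat` text. The function names and the 4096-byte default are assumptions for the sketch, not the PR's actual implementation:

```typescript
// Sketch: parse the TCP "mem" field from /proc/net/sockstat content.
// /proc/net/sockstat reports TCP memory in pages; 4096 bytes is the common
// Linux default, but as the review notes it should ideally be queried, not assumed.
const ASSUMED_PAGE_SIZE = 4096;

/** Extracts the TCP "mem" value (in pages), or undefined if absent. */
function parseTcpMemPages(sockstat: string): number | undefined {
  const match = sockstat.match(/^TCP:.*\bmem (\d+)/m);
  return match ? Number(match[1]) : undefined;
}

/** Converts the page count to bytes; undefined signals "not available on this system". */
function tcpMemBytes(sockstat: string, pageSize = ASSUMED_PAGE_SIZE): number | undefined {
  const pages = parseTcpMemPages(sockstat);
  return pages === undefined ? undefined : pages * pageSize;
}
```

Returning `undefined` instead of throwing lets the monitor degrade gracefully on systems where the file is missing or shaped differently.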
apps/webapp/app/services/runsReplicationGlobal.server.ts (2)

Lines 3-4: Good use of symbols for global state management.

Using Symbol.for creates unique global registry keys, reducing the risk of naming collisions.

Lines 11-12: Cross-environment compatibility with a globalThis fallback.

Good practice to handle both browser and Node.js environments by falling back to global if globalThis is unavailable.
internal-packages/clickhouse/src/client/client.ts (4)

Line 4: Extended ClickhouseConfig with new configuration options.

The addition of maxOpenConnections and compression options aligns with the changes in index.ts.

Also applies to: 34-39

Lines 56-57: Properly configuring the underlying ClickHouse client.

The new options are correctly passed to the createClient function as the compression and max_open_connections parameters.

Lines 63-65: Added logging configuration with proper log-level mapping.

Good implementation of log-level configuration that integrates with the existing logging system.

Lines 303-320: Well-structured log-level conversion utility.

The helper function properly maps between application log levels and ClickHouse-specific log levels, with appropriate fallbacks.
```ts
} catch (error) {
  return json({ error: error instanceof Error ? error.message : error }, { status: 400 });
}
```

🛠️ Refactor suggestion

Avoid leaking internal error messages to API consumers.

Returning the raw error.message may expose stack traces, SQL, or infrastructure details. Log the full error server-side and send a generic message to the client.

```diff
- } catch (error) {
-   return json({ error: error instanceof Error ? error.message : error }, { status: 400 });
+ } catch (error) {
+   console.error("Failed to create runs replication service", error);
+   return json({ error: "Failed to create runs replication service" }, { status: 400 });
  }
```

📝 Committable suggestion — ‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const service = createRunReplicationService(params);

setRunsReplicationGlobal(service);

await service.start();
```

Move global registration after a successful start() to avoid stale singletons.

If service.start() throws, setRunsReplicationGlobal(service) has already stored a half-initialised object. Subsequent calls will see "service already exists" even though nothing is running.

```diff
- const service = createRunReplicationService(params);
-
- setRunsReplicationGlobal(service);
-
- await service.start();
+ const service = createRunReplicationService(params);
+
+ await service.start(); // ensure we’re fully up
+
+ setRunsReplicationGlobal(service);
```

Optionally wrap the start() in its own try/catch and call service.stop() on failure to guarantee cleanup.
```ts
redisOptions: {
  keyPrefix: "runs-replication:",
  port: env.RUN_REPLICATION_REDIS_PORT ?? undefined,
  host: env.RUN_REPLICATION_REDIS_HOST ?? undefined,
  username: env.RUN_REPLICATION_REDIS_USERNAME ?? undefined,
  password: env.RUN_REPLICATION_REDIS_PASSWORD ?? undefined,
  enableAutoPipelining: true,
  ...(env.RUN_REPLICATION_REDIS_TLS_DISABLED === "true" ? {} : { tls: {} }),
},
```

Convert RUN_REPLICATION_REDIS_PORT to a number to satisfy the redis client typings.

Environment variables are always strings; passing a string where a number is expected can cause connection failures in some redis client versions.

```diff
- port: env.RUN_REPLICATION_REDIS_PORT ?? undefined,
+ port:
+   env.RUN_REPLICATION_REDIS_PORT !== undefined
+     ? Number(env.RUN_REPLICATION_REDIS_PORT)
+     : undefined,
```

(Apply the same conversion wherever a numeric env var is forwarded.)
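A reusable variant of that conversion could look like the following sketch — `numberFromEnv` is a hypothetical helper, not code from the PR:

```typescript
// Sketch: coerce a numeric env var (always a string, if present) to a number.
// Throws on garbage rather than silently passing NaN to the redis client.
function numberFromEnv(raw: string | undefined): number | undefined {
  if (raw === undefined || raw.trim() === "") return undefined;
  const n = Number(raw);
  if (!Number.isFinite(n)) {
    throw new Error(`expected a numeric value, got "${raw}"`);
  }
  return n;
}

// Hypothetical usage at the call site:
// port: numberFromEnv(env.RUN_REPLICATION_REDIS_PORT),
```

This keeps the ternary out of every config object and gives all numeric env vars the same failure behaviour.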
```ts
const CreateRunReplicationServiceParams = z.object({
  name: z.string(),
  keepAliveEnabled: z.boolean(),
  keepAliveIdleSocketTtl: z.number(),
  maxOpenConnections: z.number(),
  maxFlushConcurrency: z.number(),
  flushIntervalMs: z.number(),
  flushBatchSize: z.number(),
  leaderLockTimeoutMs: z.number(),
  leaderLockExtendIntervalMs: z.number(),
  leaderLockAcquireAdditionalTimeMs: z.number(),
  leaderLockRetryIntervalMs: z.number(),
  ackIntervalSeconds: z.number(),
  waitForAsyncInsert: z.boolean(),
});
```

💡 Verification agent
🧩 Analysis chain

Tighten parameter validation to reject nonsensical or dangerous input.

Most numeric parameters (intervals, counts, batch sizes, TTL, etc.) must be positive integers. z.number() currently accepts NaN, fractions, and negative values (as well as numbers far larger than what the service can realistically handle). Rejecting invalid values early prevents runaway timers, mis-configured pools, and DoS vectors.

🏁 Script executed:

```shell
#!/bin/bash
set -e
echo "Searching for CreateRunReplicationServiceParams definition and usage:"
rg -n "CreateRunReplicationServiceParams" apps/webapp/app/routes/admin.api.v1.runs-replication.create.ts || true
echo
echo "Showing contents of the route file:"
sed -n '1,200p' apps/webapp/app/routes/admin.api.v1.runs-replication.create.ts || true
echo
echo "Checking for existing z.number().int() usage across the repo:"
rg -n "z\.number\(\)\.int" -A3 -B1 || true
```

Length of output: 12923

Enforce strict integer and positivity constraints on replication parameters.

All numeric fields in CreateRunReplicationServiceParams currently use z.number(), which permits negatives, fractions, and NaN. To prevent mis-configurations, resource exhaustion, and DoS scenarios, these should be constrained to integers — strictly positive or non-negative where applicable.

In apps/webapp/app/routes/admin.api.v1.runs-replication.create.ts, update the schema as follows:

```diff
 const CreateRunReplicationServiceParams = z.object({
   name: z.string(),
   keepAliveEnabled: z.boolean(),
-  keepAliveIdleSocketTtl: z.number(),
+  keepAliveIdleSocketTtl: z.number().int().nonnegative(), // ms; ≥ 0
-  maxOpenConnections: z.number(),
+  maxOpenConnections: z.number().int().positive(), // ≥ 1
-  maxFlushConcurrency: z.number(),
+  maxFlushConcurrency: z.number().int().positive(), // ≥ 1
-  flushIntervalMs: z.number(),
+  flushIntervalMs: z.number().int().positive(), // ms; ≥ 1
-  flushBatchSize: z.number(),
+  flushBatchSize: z.number().int().positive(), // ≥ 1
-  leaderLockTimeoutMs: z.number(),
+  leaderLockTimeoutMs: z.number().int().positive(), // ms; ≥ 1
-  leaderLockExtendIntervalMs: z.number(),
+  leaderLockExtendIntervalMs: z.number().int().positive(), // ms; ≥ 1
-  leaderLockAcquireAdditionalTimeMs: z.number(),
+  leaderLockAcquireAdditionalTimeMs: z.number().int().nonnegative(), // ms; ≥ 0
-  leaderLockRetryIntervalMs: z.number(),
+  leaderLockRetryIntervalMs: z.number().int().positive(), // ms; ≥ 1
-  ackIntervalSeconds: z.number(),
+  ackIntervalSeconds: z.number().int().positive(), // s; ≥ 1
   waitForAsyncInsert: z.boolean(),
 });
```

This aligns with the existing .int() usage elsewhere (e.g. batchTrigger and ClickHouse schemas) and ensures invalid values are rejected at parse time.
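To see what the stricter constraints actually reject, here is a dependency-free sketch of a check equivalent to the suggested `.int().positive()` chain (written without zod so it stands alone; `isPositiveInt` is an illustrative name):

```typescript
// Sketch: standalone equivalent of a strict positive-integer check, shown to
// illustrate the inputs the tightened schema would reject.
function isPositiveInt(value: unknown): boolean {
  // Number.isInteger is false for NaN, Infinity, and fractions, so a single
  // call covers all three failure modes flagged in the review.
  return typeof value === "number" && Number.isInteger(value) && value > 0;
}
```

Note that `typeof NaN === "number"` in JavaScript, which is why a bare "is it a number" check is not enough for interval and pool-size parameters.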