
version 1.2.1 #199

Open
yedidyakfir wants to merge 43 commits into main from develop

Conversation

@yedidyakfir
Collaborator

No description provided.

…m-no-script-error-for-async-actions

Add NOSCRIPT error recovery for async operations
…ck for all fields, not just the fields of the current class
…fields-in-apipeline-for-inherited-model-fields

Fix assign fields in apipeline for inherited model fields
yedidyakfir and others added 12 commits February 5, 2026 12:29
…_meta to apipeline, this should be only for Atomic redis, not for users
…ting codebase

- STACK.md - Technologies and dependencies
- ARCHITECTURE.md - System design and patterns
- STRUCTURE.md - Directory layout
- CONVENTIONS.md - Code style and patterns
- TESTING.md - Test structure
- INTEGRATIONS.md - External services
- CONCERNS.md - Technical debt and issues

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…develop' into feature/195-support-asave-with-apipeline-for-batched-saves

# Conflicts:
#	CHANGELOG.md
…ve-with-apipeline-for-batched-saves

Support asave() within apipeline for batched saves
Contributor

Copilot AI left a comment


Pull request overview

This pull request bumps the version from 1.2.0 to 1.2.1 and introduces a global rapyer.apipeline() context manager for batching Redis operations across multiple models, along with several important fixes for NOSCRIPT error recovery, nested pipeline support, and inherited field assignment in pipelines.

Changes:

  • Added global rapyer.apipeline() function for batching operations across multiple models without requiring a specific model instance
  • Implemented NOSCRIPT error recovery in arun_sha() to automatically re-register Lua scripts after Redis restarts
  • Fixed __setattr__ to properly handle inherited model fields by checking model_fields instead of __annotations__
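
The batching idea behind the global context manager can be sketched with the stdlib alone. Everything below is an illustrative stand-in, not rapyer's actual implementation: `FakePipeline` simulates a client-side command queue that is flushed in one round trip when the context exits.

```python
import asyncio
from contextlib import asynccontextmanager

class FakePipeline:
    """Toy pipeline: queues commands, 'executes' them all on demand."""
    def __init__(self):
        self.command_stack = []
        self.results = None

    def execute_command(self, *args):
        self.command_stack.append(args)

    async def execute(self):
        # One simulated round trip for the whole batch.
        self.results = [f"OK:{args[0]}" for args in self.command_stack]
        return self.results

@asynccontextmanager
async def apipeline():
    """Batch operations across models without a specific model instance."""
    pipe = FakePipeline()
    try:
        yield pipe
        await pipe.execute()  # flush the batch once the caller is done
    finally:
        pass  # a real implementation would also reset a contextvar here

async def main():
    async with apipeline() as pipe:
        pipe.execute_command("SET", "user:1:name", "alice")
        pipe.execute_command("HSET", "user:2", "age", "30")
    return pipe.results

print(asyncio.run(main()))  # → ['OK:SET', 'OK:HSET']
```

The key property, mirrored from the PR description, is that callers queue operations from anywhere inside the context and nothing hits the server until the context manager exits.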

Reviewed changes

Copilot reviewed 26 out of 26 changed files in this pull request and generated 1 comment.

Show a summary per file:

  • rapyer/base.py - Added global apipeline() function, updated __setattr__ to use model_fields, added client property, refactored the model's apipeline() to delegate to the global function
  • rapyer/scripts/registry.py - Added NOSCRIPT error recovery to arun_sha() via handle_noscript_error(), updated to use variant constants
  • rapyer/types/dct.py - Updated apop() and apopitem() to pass self.Meta to arun_sha() for proper error recovery
  • rapyer/scripts/constants.py - Added REDIS_VARIANT and FAKEREDIS_VARIANT constants
  • rapyer/scripts/loader.py - Updated to use variant constants instead of string literals
  • rapyer/context.py - Removed unused _context_xx_pipe variable
  • rapyer/init.py - Exported the new apipeline function
  • pyproject.toml - Version bump to 1.2.1
  • CHANGELOG.md - Documented all changes for version 1.2.1
  • docs/documentation/atomic-actions.md - Added documentation for the global pipeline feature
  • docs/api/rapyer-functions.md - Added API documentation for the apipeline() function
  • tests/integration/conftest.py - Added flush_scripts and disable_noscript_recovery fixtures
  • tests/integration/pipeline/test_pipeline_noscript_recovery.py - Updated tests to use the new fixtures, removed inline script flushing
  • tests/integration/pipeline/test_pipeline_asave_batching.py - New test file for batched asave operations
  • tests/integration/pipeline/test_apipeline_field_assignment.py - New test file for field assignment in pipelines
  • tests/integration/dct/test_redis_dict.py - Added test for persistent NOSCRIPT errors
  • tests/unit/test_scripts.py - Added test for NOSCRIPT handling with FakeRedis
  • tests/unit/types/test_dict_lua_scripts_with_fakeredis.py - Added tests for dict-operation NOSCRIPT recovery
  • tests/integration/pipeline/test_pipeline_context_manager_atomic_operations.py - Moved an import to the top of the file
  • .planning/codebase/* - Added new planning documentation files (TESTING.md, STRUCTURE.md, STACK.md, etc.)
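
The inherited-field fix in rapyer/base.py hinges on a plain-Python fact: a class's `__annotations__` lists only the fields declared on that class, while pydantic-style `model_fields` aggregates the whole inheritance chain. A minimal illustration (the class names here are made up for the example):

```python
class Base:
    id: str

class Child(Base):
    name: str

# __annotations__ sees only the current class's own declarations...
print("id" in Child.__annotations__)    # → False
print("name" in Child.__annotations__)  # → True

# ...so a __setattr__ check based on it silently misses inherited fields.
# Walking the MRO (roughly what pydantic's model_fields gives you) fixes that:
def all_fields(cls):
    fields = {}
    for klass in reversed(cls.__mro__):
        fields.update(getattr(klass, "__annotations__", {}))
    return fields

print("id" in all_fields(Child))  # → True
```

This is why switching the check from `__annotations__` to `model_fields` makes assignment work for fields defined on a parent model.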


Comment on lines +699 to +741
try:
    yield pipe
    commands_backup = list(pipe.command_stack)
    noscript_on_first_attempt = False
    noscript_on_retry = False

    try:
        await pipe.execute()
    except NoScriptError:
        noscript_on_first_attempt = True
    except ResponseError as exc:
        if ignore_redis_error:
            logger.warning(
                "Swallowed ResponseError during pipeline.execute() with "
                "ignore_redis_error=True: %s",
                exc,
            )
        else:
            raise

    if noscript_on_first_attempt:
        await scripts_registry.handle_noscript_error(redis, _meta)
        evalsha_commands = [
            (args, options)
            for args, options in commands_backup
            if args[0] == "EVALSHA"
        ]
        # Retry the pipeline's EVALSHA commands
        async with redis.pipeline(transaction=True) as retry_pipe:
            for args, options in evalsha_commands:
                retry_pipe.execute_command(*args, **options)
            try:
                await retry_pipe.execute()
            except NoScriptError:
                noscript_on_retry = True

        if noscript_on_retry:
            raise PersistentNoScriptError(
                "NOSCRIPT error persisted after re-registering scripts. "
                "This indicates a server-side problem with Redis."
            )
finally:
    _context_var.reset(pipe_prev)
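
The recovery flow above (back up the command stack, re-register the scripts, replay only the EVALSHA commands) can be exercised against a toy client. `FakeRedis` and `run_with_recovery` below are illustrative stand-ins, not rapyer's or the fakeredis package's API:

```python
import hashlib

class NoScriptError(Exception):
    pass

class FakeRedis:
    """Toy client: EVALSHA fails with NOSCRIPT until the script is loaded."""
    def __init__(self):
        self.scripts = {}

    def script_load(self, script):
        sha = hashlib.sha1(script.encode()).hexdigest()
        self.scripts[sha] = script
        return sha

    def evalsha(self, sha):
        if sha not in self.scripts:
            raise NoScriptError("NOSCRIPT No matching script")
        return "ran:" + self.scripts[sha]

def run_with_recovery(client, sha, script):
    # First attempt; on NOSCRIPT, re-register the script and retry once,
    # mirroring the arun_sha()/handle_noscript_error() flow described above.
    try:
        return client.evalsha(sha)
    except NoScriptError:
        client.script_load(script)
        return client.evalsha(sha)

script = "return 1"
sha = hashlib.sha1(script.encode()).hexdigest()
client = FakeRedis()  # fresh script cache, as after a Redis restart
print(run_with_recovery(client, sha, script))  # → ran:return 1
```

If the retry fails again, the real code escalates to PersistentNoScriptError rather than looping, which keeps a genuinely broken server from causing infinite retries.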

Copilot AI Feb 5, 2026


Critical bug: The pipeline context manager exits immediately after line 698, before yielding the pipeline object. This causes the redis-py pipeline context manager to auto-execute the (empty) pipeline when exiting the context. The pipeline object is then yielded on line 700, but it's no longer in an active context manager state.

The entire try-finally block (lines 699-741) should be inside the async with redis.pipeline(transaction=True) as pipe: block. The correct structure should have the async with encompass everything up to the finally block.
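
The failure mode Copilot describes (the inner `async with` exits before the `yield`, so the pipeline auto-executes while still empty) is easy to reproduce with an event recorder. Everything here is a self-contained toy, with `Pipe.__aexit__` standing in for redis-py's auto-execute-on-exit behavior:

```python
import asyncio
from contextlib import asynccontextmanager

events = []

class Pipe:
    async def __aenter__(self):
        events.append("enter")
        return self

    async def __aexit__(self, *exc):
        events.append("auto-execute on exit")  # redis-py executes here

@asynccontextmanager
async def broken():
    async with Pipe() as pipe:
        pass  # exits immediately: the empty pipeline auto-executes now
    yield pipe  # caller queues commands on a pipeline that already ran

@asynccontextmanager
async def fixed():
    async with Pipe() as pipe:
        yield pipe  # caller's commands are queued before __aexit__ runs

async def main():
    events.clear()
    async with broken():
        events.append("caller work")
    order_broken = list(events)
    events.clear()
    async with fixed():
        events.append("caller work")
    return order_broken, list(events)

b, f = asyncio.run(main())
print(b)  # → ['enter', 'auto-execute on exit', 'caller work']
print(f)  # → ['enter', 'caller work', 'auto-execute on exit']
```

In the broken ordering the execute step runs before any caller work, which is exactly why the yielded pipeline is dead by the time commands land on it.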

Suggested change

async with redis.pipeline(transaction=True) as pipe:
    try:
        yield pipe
        commands_backup = list(pipe.command_stack)
        noscript_on_first_attempt = False
        noscript_on_retry = False

        try:
            await pipe.execute()
        except NoScriptError:
            noscript_on_first_attempt = True
        except ResponseError as exc:
            if ignore_redis_error:
                logger.warning(
                    "Swallowed ResponseError during pipeline.execute() with "
                    "ignore_redis_error=True: %s",
                    exc,
                )
            else:
                raise

        if noscript_on_first_attempt:
            await scripts_registry.handle_noscript_error(redis, _meta)
            evalsha_commands = [
                (args, options)
                for args, options in commands_backup
                if args[0] == "EVALSHA"
            ]
            # Retry the pipeline's EVALSHA commands
            async with redis.pipeline(transaction=True) as retry_pipe:
                for args, options in evalsha_commands:
                    retry_pipe.execute_command(*args, **options)
                try:
                    await retry_pipe.execute()
                except NoScriptError:
                    noscript_on_retry = True

            if noscript_on_retry:
                raise PersistentNoScriptError(
                    "NOSCRIPT error persisted after re-registering scripts. "
                    "This indicates a server-side problem with Redis."
                )
    finally:
        _context_var.reset(pipe_prev)

