Dev -> main #89
base: main
Conversation
* update readme with pagination query param
* customize pagination
* handle pot config update
* review changes: pot cupdate_or-create
* ensure chef exists, add admins
* feat: index nadabot registry, provider and stamps
* chore: add nadabot admin
* chore: review fixes: print to log, model fixes
* filter registration by plCategories (#35)
* extend schema (#36)
* add other tlds to cors whitelist (#38)
* test groups and rules
* add dev.local to cors allowed origins
* fix payout-setting error
* make PotPayout.paid_at nullable
* fix sync db calls
* add awaits to async db methods
* remove debug logs
* add https://dev.local to cors allowed origins
* Paginate by page (#39)
* gunicorn graceful reload
* fix after install script
* add graceful gunicorn restart to testnet after install script
* avoid stopping services before migrations in after_install scripts
* Pagination limit and readme update (#40)
* update readme with pagination query param
* customize pagination
* add blacklist entry model
* add https://dev.local back to cors allowed origins
* handle pot config update (#37)
* handle pot config update
* review changes: pot cupdate_or-create
* ensure chef exists, add admins
* review changes to model
* feat: index nadabot registry, provider and stamps
* chore: add nadabot admin
* chore: review fixes: print to log, model fixes
* test groups and rules
* add blacklist entry model
* review changes to model
* fix conflict issues
* fix provider onchain_id reference
* uncomment start block
* add comment
* fix group indexing and test

Co-authored-by: Lachlan Glen <54282009+lachlanglen@users.noreply.github.com>

* create util function for common price fetching
* resolve reviews
* add chains app & Chain model (#41)
* default to NEAR chain when creating new account
* add debug logging to handler.py for memory usage & timing
* add additional debug logs for streamer processing time
* remove BlockHeight db write in cache_block_height
* set cache key in asyncio task
* remove excess debug logs
* add BlockHeight record update back
* add debug log
* Revert "Merge branch 'dev' into common-price-fetch" (reverts commit 54d526ece477b0a49957c5f085de7090f2d05542, reversing changes made to 8b6c363)
* small fixes

Co-authored-by: Lachlan Glen <54282009+lachlanglen@users.noreply.github.com>

* handle zero net amount and migration script
* remove unused import
* small net_amount updates

Co-authored-by: Lachlan Glen <54282009+lachlanglen@users.noreply.github.com>

* change account to id
* add donate contract
* index admin add whitelist deployer functin
* fix payout pot id
* api endpoint for pot factories
* fix response serializer
* method for factory update config
* index config update functions
* use pot config update method for indexing
* clear deployers before adding
* remove caching of blockheight (just save in db)
* add asyncio task scheudling back in for saving blockheight
Actionable comments posted: 1
♻️ Duplicate comments (12)
indexer_app/tasks.py (12)
6-6: Remove unused import Path. Not referenced anywhere in this module.

```diff
-from pathlib import Path
```
11-11: Drop unused Celery signal import. worker_shutdown isn't used (the handler is commented out).

```diff
-from celery.signals import task_revoked, worker_shutdown
+from celery.signals import task_revoked
```
13-14: Prune unused Django model/function imports. DecimalField, Value, Cast, and NullIf are unused.

```diff
-from django.db.models import Count, DecimalField, Q, Sum, Value
-from django.db.models.functions import Cast, NullIf
+from django.db.models import Count, Q, Sum
```
16-18: Tighten Stellar imports. The stellar_xdr and scval imports are unused (you call stellar_sdk.scval directly). Keep only Address.

```diff
-from stellar_sdk import Address, stellar_xdr, scval
+from stellar_sdk import Address
```
23-23: Import only what's used from grantpicks.models. This file only uses StellarEvent.

```diff
-from grantpicks.models import Round, RoundDeposit, StellarEvent, ProjectContact
+from grantpicks.models import StellarEvent
```
25-25: Trim unused imports from pots.models. PotApplication and PotApplicationStatus aren't used.

```diff
-from pots.models import Pot, PotApplication, PotApplicationStatus, PotPayout
+from pots.models import Pot, PotPayout
```
442-442: Remove unnecessary f-string.

```diff
-    jobs_logger.info(f"Finished processing Stellar events.")
+    jobs_logger.info("Finished processing Stellar events.")
```
201-206: Amounts handled as strings; prefer numeric model fields.
Donation.total_amount appears to be a CharField, leading to .isdigit() guards. Consider migrating to DecimalField/BigIntegerField and dropping the string checks and casts.
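As a rough illustration of that migration, here is a minimal sketch — the model name and fields follow the review's description, while the dependency name and field sizes (max_digits=78 to hold u128/yocto-scale amounts) are assumptions, not taken from the repo:

```python
# donations/migrations/00XX_total_amount_to_decimal.py (hypothetical)
from django.db import migrations, models


class Migration(migrations.Migration):
    # Assumed to follow the latest donations migration mentioned in this review.
    dependencies = [
        ("donations", "0012_update_donations_net_amount"),
    ]

    operations = [
        migrations.AlterField(
            model_name="donation",
            name="total_amount",
            # Decimal with 0 decimal places stores raw on-chain integer amounts.
            field=models.DecimalField(max_digits=78, decimal_places=0, default=0),
        ),
    ]
```

On Postgres, Django emits the column type change with a cast, so rows holding non-numeric strings would need cleanup before this could apply.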
218-231: Add timeout and robust error handling to FASTNEAR request. Prevent task hangs and surface parse errors cleanly.

```diff
-        response = requests.get(url)
-        if response.status_code != 200:
-            jobs_logger.error(
-                f"Failed to get matching pool balance for pot {pot.account}: {response.text}"
-            )
-        else:
-            data = response.json()
-            pot.matching_pool_balance = data["matching_pool_balance"]
-            jobs_logger.info(
-                f"Matching pool balance for pot {pot.account}: {pot.matching_pool_balance}"
-            )
+        try:
+            response = requests.get(url, timeout=10)
+            response.raise_for_status()
+            data = response.json()
+            pot.matching_pool_balance = data["matching_pool_balance"]
+            jobs_logger.info(
+                "Matching pool balance for pot %s: %s",
+                pot.account,
+                pot.matching_pool_balance,
+            )
+        except (requests.RequestException, ValueError, KeyError):
+            jobs_logger.exception(
+                "Failed to get matching pool balance for pot %s from %s",
+                pot.account,
+                url,
+            )
```
239-245: Same string-amount concern for public donations. Second occurrence of the .isdigit() pattern; this will simplify once amounts are numeric.
341-343: Harden contract ID validation. Direct list comparison is brittle; handle empty/missing IDs.

```diff
-    contract_ids = [settings.STELLAR_CONTRACT_ID, settings.STELLAR_PROJECTS_REGISTRY_CONTRACT]
-    if contract_ids == ['', '']:
-        return
+    contract_ids = [settings.STELLAR_CONTRACT_ID, settings.STELLAR_PROJECTS_REGISTRY_CONTRACT]
+    if not any(contract_ids) or all((cid or "").strip() == "" for cid in contract_ids):
+        jobs_logger.info("No Stellar contract IDs configured, skipping indexer")
+        return
```
401-437: Handle unknown event types to avoid infinite retries; tidy branches. Unmatched events will be retried forever; add a fallback. Minor branch cleanup.

```diff
             if event_name == 'c_project':
                 event.processed = process_project_event(event_data)
-            elif event_name == 'c_round' or event_name == 'u_round':
+            elif event_name in ('c_round', 'u_round'):
                 # Mark event as processed
                 event.processed = create_or_update_round(event_data, event.contract_id, event.ingested_at)
             elif event_name == 'apply_to_round':
                 # Mark event as processed
                 event.processed = process_application_to_round(event_data, event.transaction_hash)
             elif event_name == 'c_app':
                 event.processed = create_round_application(event_data, event.transaction_hash)
-            elif event_name == 'u_app':  # application review and aproval
+            elif event_name == 'u_app':  # application review and approval
                 event.processed = update_application(event_data, event.transaction_hash)
             elif event_name == 'u_ap':
                 event.processed = update_approved_projects(event_data)
             elif event_name == 'c_depo':
                 event.processed = process_rounds_deposit_event(event_data, event.transaction_hash)
             elif event_name == 'c_vote':
                 event.processed = process_vote_event(event_data, event.transaction_hash)
             elif event_name == "c_pay":
                 event.processed = create_round_payout(event_data, event.transaction_hash)
             elif event_name == "u_pay":
                 event.processed = update_round_payout(event_data, event.transaction_hash)
+            else:
+                jobs_logger.warning("Unknown Stellar event type: %s (id=%s)", event_name, event.id)
+                event.processed = True  # avoid hot-loop retries
             event.save()
```
🧹 Nitpick comments (11)
indexer_app/tasks.py (11)
30-30: Remove unused constant. CURRENT_BLOCK_HEIGHT_KEY isn't referenced.

```diff
-CURRENT_BLOCK_HEIGHT_KEY = "current_block_height"
```
90-93: Raise a specific exception on stream stall. Avoid a generic Exception; propagate intent cleanly.

```diff
-    except asyncio.TimeoutError:
-        logger.warning("Stream stalled: no new blocks within timeout, restarting...")  # raise Exception so sytemd can restart the worker
-        raise Exception("Stream stalled: restarting...")
+    except asyncio.TimeoutError:
+        logger.warning("Stream stalled: no new blocks within timeout, restarting...")
+        raise RuntimeError("Stream stalled: restarting...") from None
```
94-96: Log with stacktrace for unexpected errors. Use exception() to retain the traceback.

```diff
-    except Exception as e:
-        logger.error(f"Error in streamer_messages_queue: {e}")
+    except Exception:
+        logger.exception("Error in streamer_messages_queue")
```
106-109: Tune log copy for production signal. Keep logs neutral and structured.

```diff
-    logger.info(f"what's the start block, pray tell? {start_block-1}")
+    logger.info("Starting NEAR indexer from block %s", start_block - 1)
```
167-173: Use logging.exception for donation price fetch failures. Preserve tracebacks for triage.

```diff
-        except Exception as e:
-            jobs_logger.error(
-                f"Failed to fetch USD prices for donation {donation.id}: {e}"
-            )
+        except Exception:
+            jobs_logger.exception("Failed to fetch USD prices for donation %s", donation.id)
```
181-186: Use logging.exception for payout price fetch failures.

```diff
-        except Exception as e:
-            jobs_logger.error(f"Failed to fetch USD prices for payout {payout.id}: {e}")
+        except Exception:
+            jobs_logger.exception("Failed to fetch USD prices for payout %s", payout.id)
```
199-200: Avoid print in tasks. Use the jobs logger for consistency.

```diff
-        print(f"Processing pot: {pot.account}")
+        jobs_logger.info("Processing pot: %s", pot.account)
```
273-274: Use logging.exception on pot stats failure.

```diff
-        except Exception as e:
-            jobs_logger.error(f"Failed to update statistics for pot {pot.account}: {e}")
+        except Exception:
+            jobs_logger.exception("Failed to update statistics for pot %s", pot.account)
```
338-346: Make the Stellar RPC URL robust with sensible defaults. Fall back by environment when STELLAR_RPC_URL isn't configured.

```diff
-    server = stellar_sdk.SorobanServer(
-        settings.STELLAR_RPC_URL
-    )
+    rpc_url = getattr(
+        settings,
+        "STELLAR_RPC_URL",
+        "https://soroban-testnet.stellar.org"
+        if settings.ENVIRONMENT in ("testnet", "local")
+        else "https://mainnet.sorobanrpc.com",
+    )
+    server = stellar_sdk.SorobanServer(rpc_url)
```
386-388: Use logging.exception for ingestion failures.

```diff
-    except Exception as e:
-        jobs_logger.error(f"Error processing ledger {start_sequence}: {e}")
+    except Exception:
+        jobs_logger.exception("Error processing ledger %s", start_sequence)
```
439-441: Log processing exceptions with stacktrace and clean formatting.

```diff
-        except Exception as e:
-            jobs_logger.error(f"Error processing Stellar event { event_name, event.id}: {e}")
+        except Exception:
+            jobs_logger.exception("Error processing Stellar event %s (id=%s)", event_name, event.id)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
indexer_app/tasks.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
indexer_app/tasks.py (3)
grantpicks/models.py (5)
* Round (94-378)
* RoundDeposit (381-459)
* StellarEvent (494-507)
* ProjectContact (24-27)
* save (373-378)

pots/models.py (7)
* Pot (109-389)
* PotApplication (401-499)
* PotApplicationStatus (392-397)
* PotPayout (554-672)
* fetch_usd_prices (656-672)
* save (492-494)
* save (651-653)

indexer_app/utils.py (12)
* create_or_update_round (1616-1698)
* create_round_application (1735-1770)
* create_round_payout (1815-1845)
* get_block_height (1325-1329)
* get_ledger_sequence (1343-1346)
* process_application_to_round (1701-1731)
* process_project_event (1533-1612)
* process_rounds_deposit_event (1773-1811)
* process_vote_event (1468-1530)
* save_block_height (1313-1322)
* update_ledger_sequence (1333-1341)
* update_round_payout (1848-1869)
🪛 Ruff (0.12.2)
indexer_app/tasks.py
63-68: Store a reference to the return value of asyncio.create_task
(RUF006)
92-92: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
92-92: Create your own exception
(TRY002)
92-92: Avoid specifying long messages outside the exception class
(TRY003)
184-184: Do not catch blind exception: Exception
(BLE001)
185-185: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
221-221: Probable use of requests call without timeout
(S113)
273-273: Do not catch blind exception: Exception
(BLE001)
274-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
334-334: Avoid specifying long messages outside the exception class
(TRY003)
386-386: Do not catch blind exception: Exception
(BLE001)
387-387: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
439-439: Do not catch blind exception: Exception
(BLE001)
440-440: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
442-442: f-string without any placeholders
Remove extraneous f prefix
(F541)
🔇 Additional comments (3)
indexer_app/tasks.py (3)
40-47: LakeConfig init looks good. ENV-based network selection and credentials wiring are correct.
116-129: Spot indexer task wiring looks good. Dedicated queue and event loop handling are appropriate.
331-335: LGTM: Address JSON serialization helper. Simple and correct conversion for Address.
```python
                asyncio.create_task(
                    save_block_height(
                        streamer_message.block.header.height,
                        streamer_message.block.header.timestamp,
                    )
                )
                save_end_time = time.time()

                logger.info(
                    f"Time to save block height: {save_end_time - save_start_time:.4f} seconds"
                )
```
🛠️ Refactor suggestion
Store/await the task to persist block height; fix timing metrics.
Unreferenced asyncio.create_task can be GC’d and your timing logs don’t reflect the write completion.
```diff
-            asyncio.create_task(
+            save_task = asyncio.create_task(
                 save_block_height(
                     streamer_message.block.header.height,
                     streamer_message.block.header.timestamp,
                 )
             )
-            save_end_time = time.time()
+            # Keep fast, but make metrics meaningful
+            await asyncio.wait_for(save_task, timeout=2)
+            save_end_time = time.time()
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-            asyncio.create_task(
-                save_block_height(
-                    streamer_message.block.header.height,
-                    streamer_message.block.header.timestamp,
-                )
-            )
-            save_end_time = time.time()
+            save_task = asyncio.create_task(
+                save_block_height(
+                    streamer_message.block.header.height,
+                    streamer_message.block.header.timestamp,
+                )
+            )
+            # Keep fast, but make metrics meaningful
+            await asyncio.wait_for(save_task, timeout=2)
+            save_end_time = time.time()
             logger.info(
                 f"Time to save block height: {save_end_time - save_start_time:.4f} seconds"
             )
```
🧰 Tools
🪛 Ruff (0.12.2)
63-68: Store a reference to the return value of asyncio.create_task
(RUF006)
🤖 Prompt for AI Agents
In indexer_app/tasks.py around lines 63 to 73, the code spawns save_block_height
with asyncio.create_task and immediately logs timing, which can be GC'd and does
not measure the actual write time; replace the unreferenced create_task by
either assigning it to a variable and awaiting it or simply await
save_block_height(...) directly, move the save_start_time to immediately before
the await and set save_end_time after the await so the logged duration reflects
completion, and wrap the await in a try/except to catch and log any errors from
the persistence operation.
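For context on why keeping the reference matters, here is a minimal, self-contained sketch of the task-reference pattern the suggestion relies on. The names are illustrative stand-ins; the only thing assumed is asyncio's documented behavior that the event loop holds only a weak reference to tasks created via create_task:

```python
import asyncio

# Keep strong references to fire-and-forget tasks so they can't be
# garbage-collected mid-flight.
background_tasks: set[asyncio.Task] = set()


async def save_block_height_stub(height: int, timestamp: int) -> None:
    await asyncio.sleep(0)  # stand-in for the real DB write


async def handle_block(height: int, timestamp: int) -> None:
    task = asyncio.create_task(save_block_height_stub(height, timestamp))
    background_tasks.add(task)
    # Drop the reference once the task completes.
    task.add_done_callback(background_tasks.discard)


asyncio.run(handle_block(1, 0))
```

Awaiting the task (as the diff above does) is the stricter option when the timing log should reflect write completion; the reference set is the lighter option when fire-and-forget is acceptable.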
testnet -> dev
Actionable comments posted: 2
♻️ Duplicate comments (9)
indexer_app/tasks.py (6)
6-6: Prune unused imports. These symbols aren't used here; drop them to reduce noise.

```diff
-from pathlib import Path
-from celery.signals import task_revoked, worker_shutdown
+from celery.signals import task_revoked
-from django.db.models import Count, DecimalField, Q, Sum, Value
-from django.db.models.functions import Cast, NullIf
+from django.db.models import Count, Q, Sum
-from stellar_sdk import Address, stellar_xdr, scval
+from stellar_sdk import Address
-from grantpicks.models import Round, RoundDeposit, StellarEvent, ProjectContact
+from grantpicks.models import StellarEvent
-from pots.models import Pot, PotApplication, PotApplicationStatus, PotPayout
+from pots.models import Pot, PotPayout
```

Also applies to: 11-11, 13-14, 18-18, 23-23, 25-25
61-73: Don't drop the task reference; optionally await for accurate timing. An unreferenced create_task can be GC'd, and the timing doesn't reflect completion.

```diff
-        save_start_time = time.time()
-        # Update current block height
-        asyncio.create_task(
-            save_block_height(
-                streamer_message.block.header.height,
-                streamer_message.block.header.timestamp,
-            )
-        )
-        save_end_time = time.time()
+        save_start_time = time.time()
+        save_task = asyncio.create_task(
+            save_block_height(
+                streamer_message.block.header.height,
+                streamer_message.block.header.timestamp,
+            )
+        )
+        # keep fast but meaningful metrics
+        await asyncio.wait_for(save_task, timeout=2)
+        save_end_time = time.time()
```
202-207: Avoid string math; prefer numeric fields/aggregates. Summing CharField amounts with isdigit guards is brittle and slow. Store amounts as numeric (Decimal/BigInteger) and use DB aggregates.
I can draft a migration to convert Donation.total_amount and related Pot fields to numerics and refactor these sums to DB-side aggregates. Want a PR patch?
Also applies to: 239-246
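For illustration, once amounts are castable the Python-side summing could collapse to a DB-side aggregate along these lines. This is a sketch: the Donation import path and the Decimal sizing are assumptions based on the review, and a DB-level cast will fail on rows holding non-numeric strings — which is exactly why the review recommends fixing storage first:

```python
from django.db.models import DecimalField, Sum
from django.db.models.functions import Cast

from donations.models import Donation  # assumed module path

# Aggregate in the database instead of looping over rows in Python.
totals = Donation.objects.aggregate(
    total=Sum(Cast("total_amount", DecimalField(max_digits=78, decimal_places=0)))
)
# totals["total"] is a Decimal (or None for an empty queryset).
```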
341-343: Harden contract ID validation. Direct list equality to ['', ''] is fragile. Skip if all IDs are empty/blank.

```diff
-    contract_ids = [settings.STELLAR_CONTRACT_ID, settings.STELLAR_PROJECTS_REGISTRY_CONTRACT]
-    if contract_ids == ['', '']:
-        return
+    contract_ids = [settings.STELLAR_CONTRACT_ID, settings.STELLAR_PROJECTS_REGISTRY_CONTRACT]
+    if not any(cid and cid.strip() for cid in contract_ids):
+        jobs_logger.info("No Stellar contract IDs configured, skipping indexer")
+        return
```
217-231: Add timeout and error handling to the RPC call. HTTP without a timeout can hang the task; handle RequestException.

```diff
-        response = requests.get(url)
+        try:
+            response = requests.get(url, timeout=10)
+        except requests.RequestException as req_err:
+            jobs_logger.error(f"RPC error getting matching pool balance for {pot.account}: {req_err}")
+            continue
```
401-437: Handle unknown event types; fix the stray f-string. Prevent perpetual retries for unhandled events and drop the unnecessary f-prefix.

```diff
             elif event_name == "u_pay":
                 event.processed = update_round_payout(event_data, event.transaction_hash)
-            event.save()
+            else:
+                jobs_logger.warning(f"Unknown Stellar event type: {event_name} (id={event.id})")
+                # Avoid infinite reprocessing; adjust policy as needed.
+                event.processed = True
+            event.save()
@@
-    jobs_logger.info(f"Finished processing Stellar events.")
+    jobs_logger.info("Finished processing Stellar events.")
```

Also applies to: 442-442
base/settings.py (3)
31-31: Do not ship with a hardcoded SECRET_KEY fallback. If the env var is missing in prod, this is a critical security risk.

```diff
-SECRET_KEY = os.environ.get("PL_DJANGO_SECRET_KEY", "django-insecure-=r_v_es6w6rxv42^#kc2hca6p%=fe_*cog_5!t%19zea!enlju")
+SECRET_KEY = os.environ["PL_DJANGO_SECRET_KEY"]  # fail fast if unset
```

If you need a safer dev default, gate it on ENVIRONMENT == "local".
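A sketch of that gating, reusing the module's existing ENVIRONMENT value; the "dev-insecure-key" placeholder is illustrative, not a recommendation for any non-local environment:

```python
import os

ENVIRONMENT = os.environ.get("PL_ENVIRONMENT", "local")  # env var name assumed

if ENVIRONMENT == "local":
    # Local development may fall back to an obviously-unsafe placeholder.
    SECRET_KEY = os.environ.get("PL_DJANGO_SECRET_KEY", "dev-insecure-key")
else:
    # Everywhere else: fail fast at startup if the key is missing.
    SECRET_KEY = os.environ["PL_DJANGO_SECRET_KEY"]
```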
76-76: Parameterize NEAR_GRANTPICKS_CONTRACT_ID; avoid an empty dev default. Empty IDs lead to invalid RPC URLs and skipped processing.

```diff
-NEAR_GRANTPICKS_CONTRACT_ID = "v2.grantpicks.potlock.testnet" if ENVIRONMENT == "testnet" else ("" if ENVIRONMENT == "dev" else "")
+NEAR_GRANTPICKS_CONTRACT_ID = os.getenv(
+    "PL_NEAR_GRANTPICKS_CONTRACT_ID",
+    "v2.grantpicks.potlock.testnet" if ENVIRONMENT == "testnet" else ""
+)
```

Consider failing fast if empty outside local/test envs.
246-251: Tighten the grantpicks CORS regex. The current pattern matches malicious-grantpicks.com.

```diff
 CORS_ALLOWED_ORIGIN_REGEXES = [
     "^https:\/\/potlock-next-[\w-]+-potlock\.vercel\.app\/?$",
-    "^https?:\/\/.*\.?grantpicks\.com$",
+    "^https?:\/\/(?:[A-Za-z0-9-]+\.)?grantpicks\.com\/?$",
     "^https:\/\/staging\.app\.potlock\.(org|io|xyz|app)\/?$",
     "^https:\/\/staging\.alpha\.potlock\.(org|io|xyz|app)\/?$"
 ]
```
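A quick self-contained check illustrates the difference between the two patterns (both copied from the diff above):

```python
import re

old = re.compile(r"^https?:\/\/.*\.?grantpicks\.com$")
new = re.compile(r"^https?:\/\/(?:[A-Za-z0-9-]+\.)?grantpicks\.com\/?$")

evil = "https://malicious-grantpicks.com"  # lookalike domain
good = "https://app.grantpicks.com"        # legitimate subdomain

assert old.match(evil) and old.match(good)      # old pattern lets both through
assert new.match(good) and not new.match(evil)  # new pattern blocks the lookalike
```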
🧹 Nitpick comments (4)
indexer_app/tasks.py (3)
199-200: Use the logger, not print, in Celery tasks. Replace print with jobs_logger for consistency.

```diff
-        print(f"Processing pot: {pot.account}")
+        jobs_logger.info(f"Processing pot: {pot.account}")
```
386-387: Use logging.exception for tracebacks; narrow the excepts. These broad catches lose stack traces.

```diff
-    except Exception as e:
-        jobs_logger.error(f"Error processing ledger {start_sequence}: {e}")
+    except Exception:
+        jobs_logger.exception(f"Error processing ledger {start_sequence}")
@@
-        except Exception as e:
-            jobs_logger.error(f"Error processing Stellar event { event_name, event.id}: {e}")
+        except Exception:
+            jobs_logger.exception(f"Error processing Stellar event (type={event.event_type}, id={event.id})")
```

Also applies to: 439-441
30-30: Remove unused constant. CURRENT_BLOCK_HEIGHT_KEY isn't referenced.

```diff
-CURRENT_BLOCK_HEIGHT_KEY = "current_block_height"
```

base/settings.py (1)
334-334: Avoid the LOG_LEVEL redefinition; use one env var source. You already set LOG_LEVEL at line 51; this shadowing is confusing.

```diff
-LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
```

Use the existing LOG_LEVEL and keep a single env name (prefer PL_LOG_LEVEL).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
base/settings.py (6 hunks)
indexer_app/tasks.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
indexer_app/tasks.py (3)
grantpicks/models.py (5)
* Round (94-378)
* RoundDeposit (381-459)
* StellarEvent (494-507)
* ProjectContact (24-27)
* save (373-378)

pots/models.py (7)
* Pot (109-389)
* PotApplication (401-499)
* PotApplicationStatus (392-397)
* PotPayout (554-672)
* fetch_usd_prices (656-672)
* save (492-494)
* save (651-653)

indexer_app/utils.py (12)
* create_or_update_round (1616-1698)
* create_round_application (1735-1770)
* create_round_payout (1815-1845)
* get_block_height (1325-1329)
* get_ledger_sequence (1343-1346)
* process_application_to_round (1701-1731)
* process_project_event (1533-1612)
* process_rounds_deposit_event (1773-1811)
* process_vote_event (1468-1530)
* save_block_height (1313-1322)
* update_ledger_sequence (1333-1341)
* update_round_payout (1848-1869)
🪛 Ruff (0.12.2)
indexer_app/tasks.py
63-68: Store a reference to the return value of asyncio.create_task
(RUF006)
92-92: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
92-92: Create your own exception
(TRY002)
92-92: Avoid specifying long messages outside the exception class
(TRY003)
184-184: Do not catch blind exception: Exception
(BLE001)
185-185: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
221-221: Probable use of requests call without timeout
(S113)
273-273: Do not catch blind exception: Exception
(BLE001)
274-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
334-334: Avoid specifying long messages outside the exception class
(TRY003)
386-386: Do not catch blind exception: Exception
(BLE001)
387-387: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
439-439: Do not catch blind exception: Exception
(BLE001)
440-440: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
442-442: f-string without any placeholders
Remove extraneous f prefix
(F541)
base/settings.py
76-76: Useless if-else condition
(RUF034)
🔇 Additional comments (2)
indexer_app/tasks.py (2)
331-335: LGTM: JSON-serializing Address. Simple and correct; keeps event data JSON-safe.
338-341: Correct: using a Stellar RPC (Soroban) endpoint. Good move off Horizon; configurable via settings.
base/settings.py
Outdated
```python
RECLAIM_APP_ID = os.environ.get("PL_RECLAIM_APP_ID")
RECLAIM_APP_SECRET = os.environ.get("PL_RECLAIM_APP_SECRET")
RECLAIM_TWITTER_PROVIDER_ID = os.environ.get("PL_RECLAIM_TWITTER_PROVIDER_ID")
INDEXER_STREAMER_WAIT_TIME = os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", 300)  # in seconds
```
Cast INDEXER_STREAMER_WAIT_TIME to int.
Currently a string when provided via env; asyncio.wait_for requires numeric.
```diff
-INDEXER_STREAMER_WAIT_TIME = os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", 300)  # in seconds
+INDEXER_STREAMER_WAIT_TIME = int(os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", "300"))  # seconds
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-INDEXER_STREAMER_WAIT_TIME = os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", 300)  # in seconds
+INDEXER_STREAMER_WAIT_TIME = int(os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", "300"))  # seconds
```
🤖 Prompt for AI Agents
In base/settings.py around line 65, INDEXER_STREAMER_WAIT_TIME is currently read
as a string from the environment which will break asyncio.wait_for; change the
assignment to convert the env value to an int (use
int(os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", 300)) or
int(os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", "300"))), and optionally
validate or catch ValueError to surface a clear error if the provided env var is
not an integer.
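A minimal sketch of the cast with the optional ValueError guard the prompt mentions — the env var name comes from the review; the error message wording is illustrative:

```python
import os

raw = os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", "300")
try:
    INDEXER_STREAMER_WAIT_TIME = int(raw)  # asyncio.wait_for needs a number
except ValueError as err:
    # Surface a clear configuration error instead of a confusing TypeError later.
    raise ValueError(
        f"PL_INDEXER_STREAMER_WAIT_TIME must be an integer, got {raw!r}"
    ) from err
```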
```python
        except asyncio.TimeoutError:
            logger.warning("Stream stalled: no new blocks within timeout, restarting...")  # raise Exception so sytemd can restart the worker
            raise Exception("Stream stalled: restarting...")
```
Raise a specific exception; chain correctly.
Avoid bare Exception and include cause semantics.
```diff
+class StreamStalled(Exception):
+    """NEAR lake stream stalled; restart worker."""
+    pass
@@
-    except asyncio.TimeoutError:
-        logger.warning("Stream stalled: no new blocks within timeout, restarting...")  # raise Exception so sytemd can restart the worker
-        raise Exception("Stream stalled: restarting...")
+    except asyncio.TimeoutError as err:
+        logger.warning("Stream stalled: no new blocks within timeout; restarting...")
+        raise StreamStalled("Stream stalled: restarting") from err
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-        except asyncio.TimeoutError:
-            logger.warning("Stream stalled: no new blocks within timeout, restarting...")
-            raise Exception("Stream stalled: restarting...")
+        # Define a specific exception for stream stalls
+        class StreamStalled(Exception):
+            """NEAR lake stream stalled; restart worker."""
+            pass
+
+        # … inside your streaming loop or function …
+        try:
+            ...  # existing streaming logic
+        except asyncio.TimeoutError as err:
+            logger.warning("Stream stalled: no new blocks within timeout; restarting...")
+            raise StreamStalled("Stream stalled: restarting") from err
```
🧰 Tools
🪛 Ruff (0.12.2)
92-92: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
92-92: Create your own exception
(TRY002)
92-92: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In indexer_app/tasks.py around lines 90 to 93, the code catches
asyncio.TimeoutError then raises a bare Exception which loses type fidelity and
stack context; replace the bare Exception with a specific exception (either
define and raise a custom StreamStalledError in this module or raise a suitable
built-in like RuntimeError) and chain the original TimeoutError using "raise
NewException('Stream stalled: restarting...') from e" so the original timeout is
preserved as the __cause__.
allow app to dev
Testnet -> dev
Actionable comments posted: 48
♻️ Duplicate comments (47)
indexer_app/examples.txt (1)
5-32: Naming conflict: the same struct name is used for different structures — already raised in a past review. Both donation types use the same struct name DonationExternal (lines 5 and 36) but with distinct field schemas. Although this is a documentation file, the naming creates ambiguity and does not reflect how these would be implemented in actual code (where two structs cannot share the same name in the same scope). To improve clarity, rename the structures to reflect their different purposes, or document them as variants of a single enum type:

POT DONATION: PotDonationExternal
DIRECT DONATION: DirectDonationExternal

Alternatively, present them as:

```
enum DonationExternal {
    Pot { /* pot donation fields */ },
    Direct { /* direct donation fields */ },
}
```

Also applies to: 36-57
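For illustration, the same variants idea expressed in Python. The fields beyond donor_id and total_amount are hypothetical placeholders; the real schemas live in examples.txt:

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class PotDonationExternal:
    donor_id: str
    total_amount: str
    pot_id: str  # assumed pot-specific field


@dataclass
class DirectDonationExternal:
    donor_id: str
    total_amount: str
    recipient_id: str  # assumed direct-donation field


# One named type whose variants are the two distinct donation shapes.
DonationExternal = Union[PotDonationExternal, DirectDonationExternal]
```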
.github/workflows/deploy-prod.yml (1)
19-19: Update the GitHub Actions checkout version to @v4. The actions/checkout@v3 version is outdated. Update to a newer version for compatibility with the latest GitHub Actions runner environment.

```diff
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
```

pots/migrations/0005_alter_potapplication_updated_at.py (1)
13-21: Verify manual timestamp management for the updated_at field. The updated_at field lacks auto_now=True, meaning it won't automatically update on model save. Ensure that application code properly manages this timestamp, or consider adding auto_now=True for automatic behavior.
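The two options look roughly like this — a sketch, with the helper name and instance purely illustrative:

```python
from django.utils import timezone

# Option 1: let Django manage it — declare the field with auto_now=True:
#     updated_at = models.DateTimeField(auto_now=True)
# Django then refreshes updated_at on every save().


# Option 2: keep auto_now off and set the timestamp explicitly at each
# mutation site (hypothetical PotApplication instance):
def touch(application):
    application.updated_at = timezone.now()
    application.save(update_fields=["updated_at"])
```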
pots/migrations/0006_alter_pot_custom_sybil_checks_and_more.py (2)
13-78: Critical: CharField requires a max_length parameter. All CharField definitions for the Pot model are missing the required max_length parameter. Django will raise an error when this migration is applied. Each field needs an appropriate max_length:
- Financial fields (matching_pool_balance, min_matching_pool_donation_amount, total_matching_pool, total_public_donations): Consider max_length=78 to accommodate large numbers as strings
- Provider/config fields (custom_sybil_checks, protocol_config_provider, registry_provider, sybil_wrapper_provider): Consider max_length=255 or appropriate value
97-101: Critical: CharField requires max_length for the amount field. The potpayout.amount field is being converted to CharField without specifying max_length. This will cause the migration to fail. For financial amounts stored as strings, consider max_length=78 to accommodate large numeric values.
activities/admin.py (1)
46-53: Re: commented-out admin permissions — confirm intent and guardrails. These remain commented, effectively reverting to default Django permissions. If this relaxation is intentional, document it and ensure role-based restrictions are enforced elsewhere (e.g., per-model permissions or read-only admin). Otherwise, restore them.
activities/models.py (1)
54-60: Reminder: audit null handling at call sites for tx_hash. A prior review already flagged call sites that interpolate tx_hash. Ensure None/"" are handled in URLs and serializers.

```bash
#!/bin/bash
# Find potential unsafe interpolations
rg -nP --type=py -C2 '\btx_hash\b' | sed -n '1,120p'
```
122-132: Validate inputs, use request.data, map platform→provider, use DRF Response + logging; avoid the broad except. The current code risks 500s, ignores platform, prints, and mixes JsonResponse.

```diff
 from reclaim_python_sdk import ReclaimProofRequest
+import logging
@@
 class ReclaimProofRequestConfigSerializer(serializers.Serializer):
     reclaimProofRequestConfig = serializers.CharField()

+class ReclaimProofRequestInputSerializer(serializers.Serializer):
+    platform = serializers.CharField(required=False)  # e.g., "twitter"
+    handle = serializers.CharField(required=True, min_length=1, max_length=64)
+
+logger = logging.getLogger(__name__)
@@
-    def post(self, request: Request, *args, **kwargs):
-        APP_ID = settings.RECLAIM_APP_ID
-        APP_SECRET = settings.RECLAIM_APP_SECRET
-        PROVIDER_ID = settings.RECLAIM_TWITTER_PROVIDER_ID
-
-        platform = request.query_params.get("platform")
-        handle = request.query_params.get("handle")
-
-        try:
-            reclaim_proof_func = async_to_sync(ReclaimProofRequest.init)
-            reclaim_proof_request = reclaim_proof_func(APP_ID, APP_SECRET, PROVIDER_ID, {"context": {"handle": handle}})
-            # reclaim_proof_request.set_app_callback_url("https://your-backend.com/receive-proofs")
-            reclaim_proof_request_config = reclaim_proof_request.to_json_string()
-
-            return JsonResponse({"reclaimProofRequestConfig": reclaim_proof_request_config})
-        except Exception as error:
-            print(f"Error generating request config: {error}")
-            return JsonResponse({"error": "Failed to generate request config"}, status=500)
+    def post(self, request: Request):
+        data = request.data or {}
+        serializer = ReclaimProofRequestInputSerializer(data=data)
+        if not serializer.is_valid():
+            return Response(serializer.errors, status=400)
+        handle = serializer.validated_data["handle"]
+        platform = serializer.validated_data.get("platform", "twitter")
+
+        APP_ID = settings.RECLAIM_APP_ID
+        APP_SECRET = settings.RECLAIM_APP_SECRET
+        provider_map = {
+            "twitter": getattr(settings, "RECLAIM_TWITTER_PROVIDER_ID", None),
+            # "github": getattr(settings, "RECLAIM_GITHUB_PROVIDER_ID", None),
+        }
+        PROVIDER_ID = provider_map.get(platform)
+        if not (APP_ID and APP_SECRET and PROVIDER_ID):
+            logger.error("Missing Reclaim config: app_id/app_secret/provider_id")
+            return Response({"error": "Server misconfiguration"}, status=500)
+
+        try:
+            reclaim_init = async_to_sync(ReclaimProofRequest.init)
+            req = reclaim_init(APP_ID, APP_SECRET, PROVIDER_ID, {"context": {"handle": handle}})
+            config_str = req.to_json_string()
+            return Response({"reclaimProofRequestConfig": config_str}, status=200)
+        except Exception as exc:
+            logger.exception("Failed to generate Reclaim request config: %s", exc)
+            return Response({"error": "Failed to generate request config"}, status=500)
```

```bash
#!/bin/bash
# Quick grep for remaining JsonResponse in DRF views (optional cleanup)
rg -nP --type=py 'class\s+\w+View\(.+APIView.*\)[\s\S]*JsonResponse' -C2
```

Also applies to: 95-116
base/settings.py (4)
68-70: Cast INDEXER_STREAMER_WAIT_TIME to int. The current assignment yields a str; cast for consumers like asyncio.

```diff
-INDEXER_STREAMER_WAIT_TIME = os.environ.get(
-    "PL_INDEXER_STREAMER_WAIT_TIME", 300
-)  # in seconds
+INDEXER_STREAMER_WAIT_TIME = int(os.environ.get("PL_INDEXER_STREAMER_WAIT_TIME", "300"))  # seconds
```
271-276: Tighten the grantpicks CORS regex. The current pattern also matches malicious-grantpicks.com.

```diff
 CORS_ALLOWED_ORIGIN_REGEXES = [
     "^https:\/\/potlock-next-[\w-]+-potlock\.vercel\.app\/?$",
-    "^https?:\/\/.*\.?grantpicks\.com$",
+    "^https?:\/\/(?:[A-Za-z0-9-]+\.)?grantpicks\.com\/?$",
     "^https:\/\/staging\.app\.potlock\.(org|io|xyz|app)\/?$",
     "^https:\/\/staging\.alpha\.potlock\.(org|io|xyz|app)\/?$",
 ]
```
31-34: Don't ship an insecure SECRET_KEY fallback (non-local). Require the env var outside local; keep a safe dev fallback only for local.

```diff
-SECRET_KEY = os.environ.get(
-    "PL_DJANGO_SECRET_KEY",
-    "django-insecure-=r_v_es6w6rxv42^#kc2hca6p%=fe_*cog_5!t%19zea!enlju",
-)
+if ENVIRONMENT == "local":
+    SECRET_KEY = os.environ.get("PL_DJANGO_SECRET_KEY", "dev-insecure-key")
+else:
+    SECRET_KEY = os.environ["PL_DJANGO_SECRET_KEY"]
```
92-97: Parameterize NEAR_GRANTPICKS_CONTRACT_ID; remove the useless else. Avoid an empty default that breaks RPC URLs; read from env with a sensible default for testnet.

```diff
-NEAR_GRANTPICKS_CONTRACT_ID = (
-    "v2.grantpicks.potlock.testnet"
-    if ENVIRONMENT == "testnet"
-    else ("" if ENVIRONMENT == "dev" else "")
-)
+NEAR_GRANTPICKS_CONTRACT_ID = os.getenv(
+    "PL_NEAR_GRANTPICKS_CONTRACT_ID",
+    "v2.grantpicks.potlock.testnet" if ENVIRONMENT == "testnet" else ""
+)
```

chains/migrations/0003_add_stellar_chain.py (1)
4-15: Ensure Chain required fields and robust migration behavior. Chain.rpc_url/explorer_url are likely non-null; set defaults and handle errors; fix the comment and lints.

```diff
 def create_stellar_chain(apps, schema_editor):
     Chain = apps.get_model("chains", "Chain")
-    # Create the "stellar" chain
-    stellar_chain, created = Chain.objects.get_or_create(
-        name="stellar", defaults={"evm_compat": False}
-    )
+    # Create the "stellar" chain with required fields
+    try:
+        stellar_chain, _created = Chain.objects.get_or_create(
+            name="stellar",
+            defaults={
+                "evm_compat": False,
+                "rpc_url": "https://soroban-testnet.stellar.org",
+                "explorer_url": "https://stellar.expert/explorer/testnet",
+            },
+        )
+    except Exception as e:
+        raise RuntimeError(f"Failed to create 'stellar' chain: {e}")
@@
-    # Set "stellar" chain as the default for all existing accounts
+    # Set "stellar" chain for all existing Round objects
     Round = apps.get_model("grantpicks", "Round")
     Round.objects.update(chain=stellar_chain)
-    print("Updated all Rounds to use stellar chain")
```

nadabot/serializers.py (1)
3-3: Remove unused import. Stamp isn't used here; drop it.

```diff
-from .models import NadabotRegistry, Provider, Stamp
+from .models import NadabotRegistry, Provider
```

chains/migrations/0002_add_near_chain.py (1)
15-16: Don't overwrite existing chains; avoid noisy prints in migrations. Update only where chain is NULL to preserve pre-set chains, and drop the print.

```diff
-    Account = apps.get_model("accounts", "Account")
-    Account.objects.update(chain=near_chain)
-    print("Updated all accounts to use NEAR chain")
+    Account = apps.get_model("accounts", "Account")
+    # Only set NEAR for accounts that don't yet have a chain
+    Account.objects.filter(chain__isnull=True).update(chain=near_chain)
```

accounts/admin.py (1)
46-53: Security: admin perms commented out (reiterating). Commenting out has_add/change/delete reverts to default Django perms and may allow unintended modifications. If intentional, document and enforce via roles; otherwise restore the explicit denials.
donations/admin.py (1)
45-46: Potential null-reference issue in the token_address method. Accessing obj.token.account directly may raise an AttributeError if obj.token is None. Add a null-check to safely handle cases where token is not set.

Apply this diff to add null-safety:

```diff
     def token_address(self, obj):
-        return obj.token.account
+        return obj.token.account if obj.token else None
```

grantpicks/migrations/0005_remove_votepair_project_votepair_projects_and_more.py (1)
14-36: Missing data migration to preserve existing relationships. The migration removes the project field without preserving existing relationships. If there are existing VotePair records with project associations, this data will be lost. A data migration should be added to transfer data from the old project field to the new projects and voted_project fields before removal.

Consider reordering operations to: add new fields → migrate data → remove old field:

```python
def migrate_project_data(apps, schema_editor):
    VotePair = apps.get_model("grantpicks", "VotePair")
    for votepair in VotePair.objects.filter(project__isnull=False):
        votepair.projects.add(votepair.project)
        votepair.voted_project = votepair.project
        votepair.save()

# In operations list:
operations = [
    # First add new fields
    migrations.AddField(...),  # projects
    migrations.AddField(...),  # voted_project
    # Then migrate data
    migrations.RunPython(migrate_project_data, reverse_code=migrations.RunPython.noop),
    # Finally remove old field
    migrations.RemoveField(...),  # project
]
```

accounts/management/commands/fetchsocialprofiledata.py (1)
16-23: Add error handling and eliminate the redundant database query. The loop contains a redundant query on line 17 — you already have account.id from line 13, but then re-fetch the entire Account object. Additionally, as noted in previous reviews, this command lacks error handling, which could cause incomplete execution if any account processing fails.

Apply this diff to fix the redundant query and add error handling:

```diff
+        total = len(account_addresses)
-        for address in account_addresses:
-            account = Account.objects.get(id=address)
+        for i, account in enumerate(Account.objects.all(), 1):
+            try:
+                self.stdout.write(f"Processing {i}/{total}: {account.id}")
-            account.fetch_near_social_profile_data()
+                account.fetch_near_social_profile_data()
                 self.stdout.write(
-                    self.style.SUCCESS(f"Fetched social profile data for {address}")
+                    self.style.SUCCESS(f"✓ Fetched social profile data for {account.id}")
                 )
+            except Exception as e:
+                self.stdout.write(
+                    self.style.ERROR(f"✗ Error processing {account.id}: {str(e)}")
+                )
             # wait for 1 second to avoid rate limiting
             time.sleep(1)
```

donations/migrations/0010_migrate_ft_to_token.py (1)
6-12: Add error handling and batching for the data migration. The migration lacks error handling if a Token doesn't exist for a given donation.ft, which would cause the entire migration to fail. Additionally, loading all donations at once could cause memory issues with large datasets.

Apply this diff to add error handling and batching:

```diff
 def migrate_ft_to_token(apps, schema_editor):
     Donation = apps.get_model("donations", "Donation")
     Token = apps.get_model("tokens", "Token")
-    for donation in Donation.objects.all():
-        token = Token.objects.get(id=donation.ft)
-        donation.token = token
-        donation.save()
+    batch_size = 1000
+    total = Donation.objects.count()
+    for i in range(0, total, batch_size):
+        for donation in Donation.objects.all()[i:i+batch_size]:
+            try:
+                token = Token.objects.get(id=donation.ft)
+                donation.token = token
+                donation.save()
+            except Token.DoesNotExist:
+                # Log or handle missing token
+                pass
```

donations/api.py (1)
59-63: Use safe .pop() with default values to prevent KeyError. Multiple .pop() calls assume specific keys exist in the response. If the contract response structure changes, this will raise KeyError and return a 500 error.

Apply this diff to use safe defaults:

```diff
-        data.pop("total_donations_amount")
-        data.pop("net_donations_amount")
-        data.pop("total_donations_count")
-        data.pop("total_protocol_fees")
-        data.pop("total_referrer_fees")
+        data.pop("total_donations_amount", None)
+        data.pop("net_donations_amount", None)
+        data.pop("total_donations_count", None)
+        data.pop("total_protocol_fees", None)
+        data.pop("total_referrer_fees", None)
```

donations/migrations/0012_update_donations_net_amount.py (1)
6-16: Harden the data migration: handle non-numeric values, assign strings to CharFields, and iterate safely. This can crash on bad data and writes ints into a CharField. Add parsing guards, convert back to str, and iterate in chunks. Also consider rows where net_amount is NULL/blank, not just "0".

```diff
-def update_donations_net_amount(apps, schema_editor):
-    Donation = apps.get_model("donations", "Donation")
-    for donation in Donation.objects.filter(net_amount="0"):
-        total_amount = int(donation.total_amount)
-        protocol_fee = int(donation.protocol_fee)
-        referrer_fee = int(donation.referrer_fee or 0)
-        chef_fee = int(donation.chef_fee or 0)
-
-        net_amount = total_amount - protocol_fee - referrer_fee - chef_fee
-        donation.net_amount = net_amount
-        donation.save()
+def update_donations_net_amount(apps, schema_editor):
+    import logging
+    from decimal import Decimal, InvalidOperation
+    logger = logging.getLogger(__name__)
+    Donation = apps.get_model("donations", "Donation")
+
+    qs = Donation.objects.filter(net_amount__in=["0", "", None]).only(
+        "id", "total_amount", "protocol_fee", "referrer_fee", "chef_fee", "net_amount"
+    )
+    for donation in qs.iterator(chunk_size=1000):
+        try:
+            # Amount fields are stored as strings. Use Decimal for safety.
+            total_amount = Decimal(donation.total_amount or "0")
+            protocol_fee = Decimal(donation.protocol_fee or "0")
+            referrer_fee = Decimal(donation.referrer_fee or "0")
+            chef_fee = Decimal(donation.chef_fee or "0")
+            net_amount = total_amount - protocol_fee - referrer_fee - chef_fee
+            # Optional: prevent negatives if that's not valid business logic
+            # net_amount = max(net_amount, Decimal(0))
+            donation.net_amount = str(net_amount)  # CharField expects string
+            donation.save(update_fields=["net_amount"])
+        except (InvalidOperation, TypeError, ValueError) as e:
+            logger.error(
+                "Error updating donation %s: %s (total=%s, protocol=%s, referrer=%s, chef=%s)",
+                donation.id, e, donation.total_amount, donation.protocol_fee,
+                donation.referrer_fee, donation.chef_fee,
+            )
```

Also add an explicit reverse_code:

```diff
-        migrations.RunPython(update_donations_net_amount),
+        migrations.RunPython(update_donations_net_amount, reverse_code=migrations.RunPython.noop),
```

api/urls.py (2)
175-176: Incorrect URL name for the reclaim request endpoint. Rename from "stats_api" to a specific name.

```diff
-    path("v1/reclaim/generate-request", ReclaimProofRequestView.as_view(), name="stats_api"),
+    path("v1/reclaim/generate-request", ReclaimProofRequestView.as_view(), name="reclaim_proof_request_api"),
```
178-186: Unify round routes: plural segment and consistent int converter. Mixing "v1/round" vs "v1/rounds" and int vs str for round_id will break reverse() and swagger stability. Standardize on plural + int everywhere; also nest votes under projects.

```diff
-    path("v1/round/<int:round_id>/", RoundDetailAPI.as_view(), name="rounds_api_by_id"),
-    path("v1/round/<int:round_id>/<str:project_id>/votes", ProjectRoundVotesAPI.as_view(), name="project_round_votes_api_by_id"),
-    path("v1/rounds/<str:round_id>/applications", RoundApplicationsAPI.as_view(), name="rounds_applications_api"),
+    path("v1/rounds/<int:round_id>/", RoundDetailAPI.as_view(), name="rounds_api_by_id"),
+    path("v1/rounds/<int:round_id>/projects/<str:project_id>/votes", ProjectRoundVotesAPI.as_view(), name="project_round_votes_api_by_id"),
+    path("v1/rounds/<int:round_id>/applications", RoundApplicationsAPI.as_view(), name="rounds_applications_api"),
```

Also applies to: 179-181
nadabot/migrations/0003_alter_group_rule_type_alter_group_rule_val_and_more.py (1)
42-49: CharField requires max_length (the migration will fail without it). Add max_length or switch to TextField if the size is unbounded. This was flagged previously.

Option A (constrained):

```diff
-            field=models.CharField(
+            field=models.CharField(
                 blank=True,
                 help_text="Custom args as Base64VecU8.",
                 null=True,
+                max_length=1024,
                 verbose_name="custom args",
             ),
```

Option B (unbounded data):

```diff
-            field=models.CharField(
+            field=models.TextField(
                 blank=True,
                 help_text="Custom args as Base64VecU8.",
                 null=True,
                 verbose_name="custom args",
             ),
```

campaigns/management/commands/populatecampaigndata.py (4)
66-67: Add timeouts (and prefer a retrying Session) to external calls. All requests.get calls lack a timeout and retries, risking hangs and partial syncs.

Apply minimal timeouts now; consider a shared Session with retries next:

```diff
-        response = requests.get(url, params=params)
+        response = requests.get(url, params=params, timeout=10)
-        response = requests.get(url, params=params)
+        response = requests.get(url, params=params, timeout=10)
-        ft_metadata = requests.get(url)
+        ft_metadata = requests.get(url, timeout=10)
-        response = requests.get(url, params=params)
+        response = requests.get(url, params=params, timeout=10)
```

If you want, I can follow up with an HTTPAdapter/Retry Session and replace these call sites (see the sketch after this comment).

Also applies to: 93-94, 133-135, 231-233
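A sketch of that shared retrying Session — retry counts and backoff values are illustrative, and the Retry class's allowed_methods parameter requires urllib3 >= 1.26:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session() -> requests.Session:
    session = requests.Session()
    retry = Retry(
        total=3,                 # up to 3 retries per request
        backoff_factor=0.5,      # 0.5s, 1s, 2s between attempts
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=frozenset({"GET"}),  # only retry idempotent GETs
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session


# Usage at the call sites flagged above:
#     session = build_session()
#     response = session.get(url, params=params, timeout=10)
```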
130-130: Use Django command I/O instead of print.

```diff
-        print(f"Created new token account: {token_acct}")
+        self.stdout.write(f"Created new token account: {token_acct}")
```
156-160: Use timezone-aware datetimes (UTC). Naive datetimes can break filtering/ordering. Convert ms timestamps to UTC-aware values.

```diff
-from datetime import datetime
+from datetime import datetime
+from django.utils import timezone
-        start_at = datetime.fromtimestamp(campaign_data["start_ms"] / 1000)
+        start_at = timezone.datetime.fromtimestamp(campaign_data["start_ms"] / 1000, tz=timezone.utc)
         end_at = None
         if campaign_data.get("end_ms"):
-            end_at = datetime.fromtimestamp(campaign_data["end_ms"] / 1000)
-        created_at = datetime.fromtimestamp(campaign_data["created_ms"] / 1000)
+            end_at = timezone.datetime.fromtimestamp(campaign_data["end_ms"] / 1000, tz=timezone.utc)
+        created_at = timezone.datetime.fromtimestamp(campaign_data["created_ms"] / 1000, tz=timezone.utc)
-        donated_at = datetime.fromtimestamp(donation_data["donated_at_ms"] / 1000)
+        donated_at = timezone.datetime.fromtimestamp(donation_data["donated_at_ms"] / 1000, tz=timezone.utc)
         returned_at = None
         if donation_data.get("returned_at_ms"):
-            returned_at = datetime.fromtimestamp(donation_data["returned_at_ms"] / 1000)
+            returned_at = timezone.datetime.fromtimestamp(donation_data["returned_at_ms"] / 1000, tz=timezone.utc)
```

Also applies to: 281-286
112-120: Avoid hardcoded chain_id=1; make the chain configurable or omit it so Account.save defaults to NEAR. Hardcoding primary keys is brittle and may 404 if pk ≠ 1. Either pass nothing (letting Account.save() set NEAR), or read a setting.

```diff
-        owner, _ = Account.objects.get_or_create(
-            defaults={"chain_id": 1},
-            id=campaign_data["owner"]
-        )
+        owner, _ = Account.objects.get_or_create(id=campaign_data["owner"])
-        recipient, _ = Account.objects.get_or_create(
-            defaults={"chain_id": 1},
-            id=campaign_data["recipient"]
-        )
+        recipient, _ = Account.objects.get_or_create(id=campaign_data["recipient"])
-        token_acct, token_acct_created = Account.objects.get_or_create(defaults={"chain_id":1},id=ft_id)
+        token_acct, token_acct_created = Account.objects.get_or_create(id=ft_id)
-        donor, _ = Account.objects.get_or_create(
-            defaults={"chain_id": 1},
-            id=donation_data["donor_id"]
-        )
+        donor, _ = Account.objects.get_or_create(id=donation_data["donor_id"])
-        referrer, _ = Account.objects.get_or_create(
-            defaults={"chain_id": 1},
-            id=donation_data["referrer_id"]
-        )
+        referrer, _ = Account.objects.get_or_create(id=donation_data["referrer_id"])
```

Also applies to: 125-126, 267-270
campaigns/tests.py (2)
106-107: Assert the specific DB error (IntegrityError) for the unique constraint.

```diff
+from django.db import IntegrityError
@@
-        with self.assertRaises(Exception):  # Should raise IntegrityError
+        with self.assertRaises(IntegrityError):
             Campaign.objects.create(**campaign_data_2)
```
228-229: Assert the specific DB error (IntegrityError) for the unique constraint.

```diff
-        with self.assertRaises(Exception):  # Should raise IntegrityError
+        with self.assertRaises(IntegrityError):
             CampaignDonation.objects.create(**donation_data_2)
```

campaigns/api.py (5)
31-34: Remove unused example import.

```diff
-    SIMPLE_CAMPAIGN_DONATION_EXAMPLE,
```
63-67: Fix the status param description and a typo in the enum.

```diff
-                description="Filter by active campaigns (true/false)",
+                description="Filter by campaign status",
                 required=False,
                 type=str,
-                enum=["active", "upcoming", "ended", "unfufilled"],
+                enum=["active", "upcoming", "ended", "unfulfilled"],
```
107-111: Fix indentation in the annotate block.

```diff
-        queryset = queryset.annotate(
-                cast_net_raised=Cast('net_raised_amount', FloatField()),
-                cast_max_amount=Cast('max_amount', FloatField()),
-                cast_target=Cast('target_amount', FloatField())
-        )
+        queryset = queryset.annotate(
+            cast_net_raised=Cast('net_raised_amount', FloatField()),
+            cast_max_amount=Cast('max_amount', FloatField()),
+            cast_target=Cast('target_amount', FloatField()),
+        )
```
127-131: Fix the status comparison typo.

```diff
-        elif status == 'unfufilled':
+        elif status == 'unfulfilled':
```
350-369: Harden the external config fetch: add a timeout and validate JSON.

```diff
     def get(self, request: Request, *args, **kwargs):
         """Get campaign contract configuration"""
         url = f"{settings.FASTNEAR_RPC_URL}/account/{CAMPAIGN_CONTRACT}/view/get_config"
-        response = requests.get(url)
+        response = requests.get(url, timeout=10)
         if response.status_code == 200:
-            data = response.json()
+            try:
+                data = response.json()
+            except ValueError as e:
+                logger.error(f"Failed to parse JSON from {CAMPAIGN_CONTRACT}: {e}")
+                return Response({"error": "Invalid JSON"}, status=500)
+            # Optional: validate shape
+            serializer = CampaignContractConfigSerializer(data=data)
+            if not serializer.is_valid():
+                logger.error(f"Invalid config from {CAMPAIGN_CONTRACT}: {serializer.errors}")
+                return Response({"error": "Invalid configuration data"}, status=500)
-            return Response(data)
+            return Response(serializer.validated_data)
         else:
```

lists/models.py (1)
92-95: Confirm backfill and runtime handling of uniqueness for (on_chain_id, chain). Enforcing unique_on_chain_id_per_chain is correct, but ensure migrations backfill and upstream code handles IntegrityError gracefully.
Would you like a data-audit script to scan for duplicates pre-migration? A sketch follows.
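Such an audit could look like this — a sketch, where the List model location and field names are assumptions based on the review; any row returned would violate the new constraint:

```python
from django.db.models import Count

from lists.models import List  # assumed model location

# Group by the constrained pair and surface groups with more than one row.
duplicates = (
    List.objects.values("on_chain_id", "chain")
    .annotate(n=Count("id"))
    .filter(n__gt=1)
)
for row in duplicates:
    print(row)  # e.g. {"on_chain_id": 7, "chain": 2, "n": 3}
```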
grantpicks/serializers.py (2)
7-7: Remove unused import: Vote. Vote isn't used in this module.

```diff
-from .models import Project, ProjectContact, ProjectContract, ProjectRepository, Round, Vote, VotePair
+from .models import Project, ProjectContact, ProjectContract, ProjectRepository, Round, VotePair
```
78-81: Duplicate field; missing use_whitelist_application. Fix the fields list in RoundSerializer.

```diff
-            "use_whitelist_voting",
-            "use_whitelist_voting",
+            "use_whitelist_voting",
+            "use_whitelist_application",
```

indexer_app/handler.py (2)
1-1: Remove unused imports. Clean unused imports to reduce noise.

```diff
-import asyncio
@@
-from django.core.cache import cache
@@
-from stellar_sdk.soroban_rpc import GetEventsResponse
@@
-from grantpicks.models import StellarEvent
```

Also applies to: 10-10, 12-12, 16-16
128-129: Replace the debug print with the logger. Use structured logging.

```diff
-    print("parsa parsa...", parsed_log)
+    logger.debug("Processing event: %s", event_name)
```

indexer_app/tasks.py (4)
6-6: Prune unused imports (match previous review). Remove unused imports for cleanliness and faster startup.

```diff
-from pathlib import Path
@@
-from celery.signals import task_revoked, worker_shutdown
+from celery.signals import task_revoked
@@
-from django.db.models import Count, DecimalField, Q, Sum, Value
-from django.db.models.functions import Cast, NullIf
+from django.db.models import Count, Q, Sum
@@
-from stellar_sdk import Address, stellar_xdr, scval
+from stellar_sdk import Address
@@
-from grantpicks.models import Round, RoundDeposit, StellarEvent, ProjectContact
+from grantpicks.models import StellarEvent
@@
-from pots.models import Pot, PotApplication, PotApplicationStatus, PotPayout
+from pots.models import Pot, PotPayout
```

Also applies to: 11-11, 13-15, 18-19, 23-23, 25-26
109-114: Raise a specific, chained exception on stream stall. Replace the bare Exception; preserve the cause.

```diff
+class StreamStalled(Exception):
+    """NEAR lake stream stalled; restart worker."""
+    pass
@@
-    except asyncio.TimeoutError:
-        logger.warning(
-            "Stream stalled: no new blocks within timeout, restarting..."
-        )  # raise Exception so sytemd can restart the worker
-        raise Exception("Stream stalled: restarting...")
+    except asyncio.TimeoutError as err:
+        logger.warning("Stream stalled: no new blocks within timeout; restarting...")
+        raise StreamStalled("Stream stalled: restarting") from err
```
239-253: Add a timeout to the HTTP request. Prevent hangs when fetching the matching pool balance.

```diff
-        response = requests.get(url)
+        response = requests.get(url, timeout=10)
```
223-236: String amounts in stats; consider numeric fields.Using isdigit() on Donation.total_amount indicates CharField storage. Prefer numeric DB types and remove string guards.
Would you like a migration to convert Donation.total_amount to Decimal/BigInteger and update call sites? I can generate it.
Also applies to: 260-266
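A rough sketch of what that conversion could look like, staged through a shadow column; the app label, field names, and migration dependency below are assumptions:

from django.db import migrations, models


def copy_amounts(apps, schema_editor):
    Donation = apps.get_model("donations", "Donation")
    for donation in Donation.objects.all().iterator():
        raw = donation.total_amount or "0"
        # mirror the existing isdigit() guard while legacy rows may be non-numeric
        donation.total_amount_decimal = int(raw) if raw.isdigit() else 0
        donation.save(update_fields=["total_amount_decimal"])


class Migration(migrations.Migration):
    dependencies = [("donations", "0001_initial")]  # placeholder dependency
    operations = [
        migrations.AddField(
            model_name="donation",
            name="total_amount_decimal",
            field=models.DecimalField(max_digits=40, decimal_places=0, default=0),
        ),
        migrations.RunPython(copy_amounts, migrations.RunPython.noop),
    ]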
grantpicks/models.py (2)
225-229: approved_projects type mismatch vs migrations (Account vs Project).

Past review flagged migrations referencing grantpicks.Project while the model uses Account, which will break migrations/runtime semantics. Decide on one representation and align code/migrations.
Run to confirm:
#!/bin/bash
rg -nP -C3 "approved_projects\s*=\s*models\.ManyToManyField" grantpicks
rg -nP -C2 "approved_projects" grantpicks/migrations

If you switch to Project, adjust the utils (update_approved_projects, update_application) to add Project rows, not owner Accounts.
352-371: Hardcoded token lookup; add None-safety and narrower exceptions.

Use chain-derived token, handle None amounts, and avoid blind except.
     def update_vault_usd_equivalent(self):
-        # first, see if there is a TokenHistoricalPrice within 1 day (or HISTORICAL_PRICE_QUERY_HOURS) of self.paid_at
-        try:
-            token = Token.objects.get(account_id="stellar")
+        try:
+            if not self.chain:
+                logger.warning(f"Round {self.id}: no chain; skipping USD valuation")
+                return
+            token_acct_id = self.chain.name.lower()
+            token = Token.objects.filter(account_id=token_acct_id).first()
+            if not token:
+                logger.warning(f"No token mapping for chain {self.chain.name}")
+                return
             price_usd = token.fetch_usd_prices_common(datetime.now())
             if not price_usd:
                 logger.info(
                     f"No USD price found for token {token.symbol} at {datetime.now()}"
                 )
                 return
-            self.vault_total_deposits_usd = (
-                token.format_price(self.vault_total_deposits) * price_usd
-            )
-            self.current_vault_balance_usd = (
-                token.format_price(self.current_vault_balance) * price_usd
-            )
+            vtd = token.format_price(self.vault_total_deposits or "0") * price_usd
+            cvb = token.format_price(self.current_vault_balance or "0") * price_usd
+            self.vault_total_deposits_usd = vtd
+            self.current_vault_balance_usd = cvb
             self.save()
             logger.info(f"Saved USD prices for round vault for round id: {self.id}")
-        except Exception as e:
-            logger.error(f"Failed to calculate and stellar vault USD prices: {e}")
+        except Token.DoesNotExist as e:
+            logger.error(f"Token lookup failed for chain {self.chain}: {e}")
+        except Exception:
+            logger.exception("Failed to calculate vault USD prices")
now = timezone.now()
applications = PotApplication.objects.filter(
    applicant=account, status=PotApplicationStatus.APPROVED
)
pot_ids = applications.values_list("pot_id", flat=True)
pots = Pot.objects.filter(account__in=pot_ids)
if request.query_params.get("status") == "live":
    pots = pots.filter(
        matching_round_start__lte=now, matching_round_end__gte=now
    )
Bug: wrong field used when fetching Pots; should filter by id, not account.
pot_ids contains Pot PKs; filtering Pot.account__in=pot_ids is incorrect and may return empty or wrong rows.
- pot_ids = applications.values_list("pot_id", flat=True)
- pots = Pot.objects.filter(account__in=pot_ids)
+ pot_ids = applications.values_list("pot_id", flat=True).distinct()
+ pots = (
+ Pot.objects.filter(id__in=pot_ids)
+ .select_related("deployer", "owner", "chef", "token")
+ .prefetch_related("admins")
+    )

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 now = timezone.now()
 applications = PotApplication.objects.filter(
     applicant=account, status=PotApplicationStatus.APPROVED
 )
-pot_ids = applications.values_list("pot_id", flat=True)
-pots = Pot.objects.filter(account__in=pot_ids)
+pot_ids = applications.values_list("pot_id", flat=True).distinct()
+pots = (
+    Pot.objects.filter(id__in=pot_ids)
+    .select_related("deployer", "owner", "chef", "token")
+    .prefetch_related("admins")
+)
 if request.query_params.get("status") == "live":
     pots = pots.filter(
         matching_round_start__lte=now, matching_round_end__gte=now
     )
🤖 Prompt for AI Agents
In accounts/api.py around lines 205 to 214, the code uses
Pot.objects.filter(account__in=pot_ids) but pot_ids are Pot primary keys from
applications.values_list("pot_id", flat=True); change the filter to use the PK
field (e.g., Pot.objects.filter(id__in=pot_ids) or pk__in=pot_ids) so you're
selecting Pots by their IDs, not by account. Ensure pot_ids is passed directly
(it’s a queryset of IDs) and leave the subsequent "live" status filtering
unchanged.
field=models.ForeignKey(
    blank=True,
    help_text="Blockchain this account is located on.",
    null=True,
    on_delete=django.db.models.deletion.CASCADE,
    related_name="accounts",
    related_query_name="account",
    to="chains.chain",
),
Risky cascade delete from Chain → Account; switch to PROTECT.
Deleting a Chain will delete all related Accounts. That’s high-risk, especially once chain becomes non-null in 0005.
Prefer PROTECT (or RESTRICT) to prevent accidental data loss.
If migrations aren’t applied yet, update here; otherwise add a follow-up AlterField:
- on_delete=django.db.models.deletion.CASCADE,
+    on_delete=django.db.models.deletion.PROTECT,

Or as a new migration after 0005:
+from django.db import migrations, models
+import django.db.models.deletion
+
+class Migration(migrations.Migration):
+    dependencies = [("accounts", "0005_alter_account_chain")]
+    operations = [
+        migrations.AlterField(
+            model_name="account",
+            name="chain",
+            field=models.ForeignKey(
+                help_text="Blockchain this account is located on.",
+                on_delete=django.db.models.deletion.PROTECT,
+                related_name="accounts",
+                related_query_name="account",
+                to="chains.chain",
+            ),
+        ),
+    ]

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 field=models.ForeignKey(
     blank=True,
     help_text="Blockchain this account is located on.",
     null=True,
-    on_delete=django.db.models.deletion.CASCADE,
+    on_delete=django.db.models.deletion.PROTECT,
     related_name="accounts",
     related_query_name="account",
     to="chains.chain",
 ),
🤖 Prompt for AI Agents
In accounts/migrations/0004_account_chain.py around lines 18 to 26, the
ForeignKey for chain uses on_delete=django.db.models.deletion.CASCADE which will
delete Accounts when a Chain is removed; change the on_delete to
django.db.models.deletion.PROTECT (or models.PROTECT) to prevent cascade
deletes. If this migration has not been applied yet, update the on_delete value
directly in this migration; if it has been applied, create a new migration that
runs migrations.AlterField on the Account.chain field to set
on_delete=models.PROTECT (and import the appropriate PROTECT constant) so
existing databases are updated safely.
field=models.ForeignKey(
    help_text="Blockchain this account is located on.",
    on_delete=django.db.models.deletion.CASCADE,
    related_name="accounts",
    related_query_name="account",
    to="chains.chain",
),
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Making Account.chain NOT NULL needs a backfill first.
0004 allowed nulls; flipping to NOT NULL without a data migration will fail if any existing accounts have chain IS NULL.
Two safe options:
- Stage this migration to keep null=True, add a new migration that:
- RunPython to populate chain (by stable slug/name), then
- AlterField to null=False.
- Or, add a RunPython here to backfill before the AlterField to null=False.
If you want, I can draft the exact RunPython using your canonical default chain.
Quick DB check:
🏁 Script executed:
#!/bin/bash
psql "$DATABASE_URL" -c "SELECT COUNT(*) AS null_accounts FROM accounts_account WHERE chain_id IS NULL;"Length of output: 42
I'll verify the migration context by examining the migration files and model definition instead.
---
🏁 Script executed:
#!/bin/bash
# Find and examine the Account model definition
find . -type f -name "models.py" -path "*/accounts/*" -exec cat {} \;
Length of output: 6472
🏁 Script executed:
#!/bin/bash
# Examine migration 0004 to see what it did
find . -type f -path "*/accounts/migrations/0004_*.py" -exec cat {} \;Length of output: 763
🏁 Script executed:
#!/bin/bash
# Examine full migration 0005 file for context
find . -type f -path "*/accounts/migrations/0005_*.py" -exec cat {} \;Length of output: 709
🏁 Script executed:
#!/bin/bash
# Check if there are any migrations after 0005 that might handle this
find . -type f -path "*/accounts/migrations/*.py" | sortLength of output: 349
Migration 0005 requires backfilling NULL chain_id values before removing the null=True constraint.
The migration correctly identifies a real safety issue: migration 0004 adds chain with null=True, and migration 0005 immediately removes that, converting to null=False. If any Account records have chain_id IS NULL when 0005 executes, the constraint violation will fail the migration.
Application-level defaults in the save() method only apply to new instances; they don't backfill existing NULL records in the database.
Recommended fixes (in order of preference):
- Add RunPython to 0005 before the AlterField to backfill NULL chains to a canonical default (e.g., NEAR):

  def backfill_chain(apps, schema_editor):
      Account = apps.get_model('accounts', 'Account')
      Chain = apps.get_model('chains', 'Chain')
      near_chain = Chain.objects.get(name='NEAR')
      Account.objects.filter(chain_id__isnull=True).update(chain=near_chain)

- Stage with a separate migration: keep 0005 as null=True, create 0006 with a RunPython backfill + AlterField to null=False.

- If deploying to a fresh schema with no existing accounts, document the assumption and ensure 0004 → 0005 run in the same deployment before any accounts are created.
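A sketch of the staged variant (a new 0006 that backfills before a follow-up AlterField tightens the constraint); it assumes the canonical Chain row is named 'NEAR':

from django.db import migrations


def backfill_chain(apps, schema_editor):
    Account = apps.get_model("accounts", "Account")
    Chain = apps.get_model("chains", "Chain")
    near_chain = Chain.objects.get(name="NEAR")
    Account.objects.filter(chain_id__isnull=True).update(chain=near_chain)


class Migration(migrations.Migration):
    dependencies = [("accounts", "0005_alter_account_chain")]
    operations = [
        # backfill first; a later AlterField can then safely set null=False
        migrations.RunPython(backfill_chain, migrations.RunPython.noop),
    ]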
🤖 Prompt for AI Agents
In accounts/migrations/0005_alter_account_chain.py around lines 18 to 24, the
migration drops null=True on Account.chain but does not backfill existing NULL
chain_id values, which will cause the migration to fail if any Account rows have
chain_id IS NULL; add a RunPython data migration placed before the AlterField
that loads the historical Account and Chain models via apps.get_model, looks up
the canonical default Chain (e.g., name='NEAR') and runs
Account.objects.filter(chain_id__isnull=True).update(chain=near_chain) to set a
non-null value for all existing rows; alternatively implement this as a two-step
approach (keep null=True in 0005, create a new migration that backfills then
alters to null=False) and document the chosen deployment assumption if you
intend to require a fresh schema.
| url = f"{settings.FASTNEAR_RPC_URL}/account/{settings.NEAR_SOCIAL_CONTRACT_ADDRESS}/view/get" | ||
| keys_value = f'["{self.id}/profile/**"]' | ||
| params = {"keys.json": keys_value} | ||
| response = requests.get(url, params=params) |
Add timeout to all requests calls to prevent indefinite hangs.
All three requests calls (lines 77, 96, 112) lack timeout parameters, which could cause the application to hang indefinitely if the external service becomes unresponsive.
Apply this diff to add timeouts:
-        response = requests.get(url, params=params)
+        response = requests.get(url, params=params, timeout=10)

-        response = requests.get(url)
+        response = requests.get(url, timeout=10)

         response = requests.post(
-            url, json=json_data
+            url, json=json_data, timeout=10
         )
🧰 Tools
🪛 Ruff (0.14.1)
77-77: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
In accounts/models.py around lines 77, 96, and 112-113, the
requests.get/requests.post calls lack a timeout which can cause indefinite
hangs; update each call to pass a timeout argument (preferably a small
reasonable default like timeout=5 or use a module-level/configurable constant
such as REQUEST_TIMEOUT) and handle timeout-related exceptions
(requests.Timeout, requests.RequestException) where appropriate to ensure the
code fails fast and logs or returns an error instead of hanging.
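One way the constant-plus-error-handling pattern from the prompt could look; the helper and constant names are hypothetical:

import logging

import requests

logger = logging.getLogger(__name__)

REQUEST_TIMEOUT = 10  # seconds; tune per deployment


def fetch_json(url, **kwargs):
    """GET a URL with a bounded timeout; return parsed JSON or None on failure."""
    try:
        response = requests.get(url, timeout=REQUEST_TIMEOUT, **kwargs)
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        logger.warning("Request to %s timed out after %ss", url, REQUEST_TIMEOUT)
    except requests.RequestException as e:
        logger.error("Request to %s failed: %s", url, e)
    return None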
def save(self, *args, **kwargs):
    if self._state.adding:  # If the account is being created (not updated)
        if not self.chain_id:
            # default to NEAR chain when none is provided
            self.chain = Chain.objects.get(name="NEAR")
        self.fetch_near_social_profile_data(
            False  # don't save yet as we want to avoid infinite loop
        )
    super().save(*args, **kwargs)
Blocking external API call in save() is risky.
Calling fetch_near_social_profile_data() during save() makes every account creation synchronously wait for external NEAR API calls, which can:
- Significantly slow down account creation
- Cause saves to fail if the external service is down
- Block request threads in web contexts
Consider moving this to an async task (Celery) or calling it explicitly after save when needed.
Example pattern:
def save(self, *args, **kwargs):
    if self._state.adding:
        if not self.chain_id:
            self.chain = Chain.objects.get(name="NEAR")
    super().save(*args, **kwargs)
    # Optionally trigger async task after save
    # if self._state.adding:
    #     fetch_near_social_profile_data_task.delay(self.id)
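A sketch of the companion Celery task referenced above, assuming Celery is configured for the project and that fetch_near_social_profile_data() saves by default when called outside save():

from celery import shared_task


@shared_task(bind=True, max_retries=3)
def fetch_near_social_profile_data_task(self, account_id):
    # deferred profile fetch so Account.save() never blocks on external APIs
    from accounts.models import Account

    try:
        account = Account.objects.get(id=account_id)
        account.fetch_near_social_profile_data()
    except Account.DoesNotExist:
        return  # account was deleted before the task ran
    except Exception as exc:
        # retry with backoff on transient NEAR API failures
        raise self.retry(exc=exc, countdown=30)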
| "id": 1, | ||
| "on_chain_id": 1, | ||
| "owner": SIMPLE_ACCOUNT_EXAMPLE, | ||
| "admins": [SIMPLE_ACCOUNT_EXAMPLE], | ||
| "name": "Potlock Public Goods Registry", | ||
| "description": "The official NEAR Protocol Public Goods Registry", | ||
| "cover_image_url": None, | ||
| "admin_only_registrations": False, | ||
| "default_registration_status": "Approved", | ||
| "created_at": "2024-03-27T15:24:46.104000Z", | ||
| "updated_at": "2024-04-30T19:00:51.002000Z", | ||
| } |
Example JSON omits 'chain' but serializer includes it.
To keep docs accurate, either remove 'chain' from ListSerializer fields or add it to the example payloads.
🤖 Prompt for AI Agents
In lists/serializers.py around lines 67 to 79, the example dict for
SIMPLE_LIST_EXAMPLE omits the 'chain' field while the ListSerializer includes
it; update the example to include a 'chain' entry matching the serializer shape
(use the existing SIMPLE_CHAIN_EXAMPLE constant if available, or add a minimal
chain dict/value consistent with other examples) so the example payload
accurately reflects the serializer fields.
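If the field is kept, the example could gain a matching entry; the SIMPLE_CHAIN_EXAMPLE constant below is hypothetical:

SIMPLE_CHAIN_EXAMPLE = {"id": 1, "name": "NEAR"}  # assumed shape

SIMPLE_LIST_EXAMPLE = {
    "id": 1,
    # ... existing fields as above ...
    "chain": SIMPLE_CHAIN_EXAMPLE,
}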
| "blacklisted_accounts", | ||
| models.ManyToManyField( | ||
| help_text="registry blacklisted accounts.", | ||
| related_name="registry_blacklisted_in", | ||
| to="accounts.account", | ||
| ), | ||
| ), |
🛠️ Refactor suggestion | 🟠 Major
Single source of truth for blacklists (use through model)
You have both NadabotRegistry.blacklisted_accounts M2M and a separate BlackList model with extra fields. This duplicates the relationship and risks drift. Use a through=BlackList M2M (or drop the plain M2M).
High-level migration edit:
- "blacklisted_accounts",
- models.ManyToManyField(
- help_text="registry blacklisted accounts.",
- related_name="registry_blacklisted_in",
- to="accounts.account",
- ),
+ "blacklisted_accounts",
+ models.ManyToManyField(
+ through="nadabot.BlackList",
+ help_text="registry blacklisted accounts.",
+ related_name="registry_blacklisted_in",
+ to="accounts.account",
+    ),

And ensure BlackList is created before NadabotRegistry in operations (or reference via string). I can provide a full migration reorder if needed.
Also applies to: 371-420
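At the model level, the single-source-of-truth version looks roughly like this; the through_fields names (registry, account) are assumptions about the existing BlackList model:

from django.db import models


class NadabotRegistry(models.Model):
    # one relationship; extra columns (reason, date_blacklisted, ...) live on BlackList
    blacklisted_accounts = models.ManyToManyField(
        "accounts.Account",
        through="nadabot.BlackList",
        through_fields=("registry", "account"),
        related_name="registry_blacklisted_in",
        help_text="registry blacklisted accounts.",
    )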
| contract = models.ForeignKey( | ||
| Account, | ||
| on_delete=models.CASCADE, | ||
| related_name="provider_instances", | ||
| verbose_name=_("contract ID"), | ||
| max_length=100, | ||
| null=False, | ||
| help_text=_("Contract ID of the external contract that is the source of this provider.") | ||
| ) |
ForeignKey does not accept max_length.
Remove invalid max_length arguments from contract/submitted_by FKs.
- contract = models.ForeignKey(
+ contract = models.ForeignKey(
Account,
on_delete=models.CASCADE,
related_name="provider_instances",
verbose_name=_("contract ID"),
- max_length=100,
null=False,
help_text=_("Contract ID of the external contract that is the source of this provider.")
)
@@
- submitted_by = models.ForeignKey(
+ submitted_by = models.ForeignKey(
Account,
on_delete=models.CASCADE,
related_name="providers_submitted",
verbose_name=_("submitted by"),
- max_length=100,
null=False,
help_text=_("User who submitted this provider.")
    )

Also applies to: 179-187
🤖 Prompt for AI Agents
In nadabot/models.py around lines 109-117 (and similarly at 179-187), the
ForeignKey fields include an invalid max_length argument; remove the max_length
parameter from the contract and submitted_by ForeignKey declarations, leaving
the rest of the options (on_delete, related_name, verbose_name, null, help_text)
intact; if you intended to constrain length, apply max_length on a CharField in
the related model instead; after editing, run makemigrations/migrate to update
the schema.
status = models.CharField(
    _("status"),
    max_length=20,
    choices=ProviderStatus,
    null=False,
    db_index=True,
    help_text=_("Status of the provider.")
)
Use .choices for TextChoices enums.
Django expects choices=EnumClass.choices.
- status = models.CharField(
+ status = models.CharField(
_("status"),
max_length=20,
- choices=ProviderStatus,
+ choices=ProviderStatus.choices,
null=False,
db_index=True,
help_text=_("Status of the provider.")
)
@@
- rule_type = models.CharField(
+ rule_type = models.CharField(
_("rule type"),
- choices=RuleType,
+ choices=RuleType.choices,
max_length=100,
null=True,
blank=True,
help_text=_("The rule this group uses.")
    )

Also applies to: 253-260
🤖 Prompt for AI Agents
In nadabot/models.py around lines 136 to 143, the status field uses
choices=ProviderStatus but Django expects the .choices attribute from a
TextChoices enum; change choices=ProviderStatus to
choices=ProviderStatus.choices. Also apply the same fix to the other occurrence
at lines 253 to 260 so both fields use EnumClass.choices instead of the enum
class itself.
BASE_PATTERN = (
    r"nadabot\.testnet"
    if settings.ENVIRONMENT == "testnet"
    else r"v\d+(?:new)?\.[a-zA-Z]+\.nadabot\.near"
)
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Use logging + fullmatch; broaden account charset; precompile
Current pattern likely excludes valid NEAR labels (digits, _, -, .) and prints to stdout.
import re
+import logging
from django.conf import settings
+logger = logging.getLogger(__name__)
BASE_PATTERN = (
- r"nadabot\.testnet"
+ r"nadabot\.testnet"
if settings.ENVIRONMENT == "testnet"
-else r"v\d+(?:new)?\.[a-zA-Z]+\.nadabot\.near"
+ else r"v\d+(?:new)?\.[a-z0-9_.-]+\.nadabot\.near"
)
-def match_nadabot_registry_pattern(receiver):
+_COMPILED = re.compile(rf"^{BASE_PATTERN}$")
+
+def match_nadabot_registry_pattern(receiver: str) -> bool:
"""Matches nadabot subaccounts for registry."""
- pattern = f"^{BASE_PATTERN}$"
- print(F"the TLA {pattern}, {receiver}")
- return bool(re.match(pattern, receiver))
+ logger.debug("nadabot pattern=%s receiver=%s", _COMPILED.pattern, receiver)
+    return bool(_COMPILED.fullmatch(receiver))

Also applies to: 12-16
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import re
pat = re.compile(r'^v12new.alpha-1.nadabot.near$')
print("example_match:", bool(pat.match('v12new.alpha-1.nadabot.near')))
PY

Length of output: 19
---
🏁 Script executed:
#!/bin/bash
# View the actual nadabot/utils.py file to see current code state
cat nadabot/utils.py 2>/dev/null || echo "File not found"
Length of output: 398
🏁 Script executed:
#!/bin/bash
# Test if original pattern rejects valid NEAR account names with digits/hyphens
python - <<'PY'
import re
# Original pattern from review
original = r"v\d+(?:new)?\.[a-zA-Z]+\.nadabot\.near"
# Suggested pattern
suggested = r"v\d+(?:new)?\.[a-z0-9_.-]+\.nadabot\.near"
test_cases = [
    "v1new.alpha.nadabot.near",       # original would match
    "v1new.alpha-1.nadabot.near",     # original would NOT match (has hyphen and digit)
    "v1new.alpha_test.nadabot.near",  # original would NOT match (has underscore)
    "v1.beta.nadabot.near",           # original would match
]

print("Testing ORIGINAL pattern:", original)
for tc in test_cases:
    match_result = bool(re.match(f"^{original}$", tc))
    print(f"  {tc}: {match_result}")

print("\nTesting SUGGESTED pattern:", suggested)
for tc in test_cases:
    match_result = bool(re.match(f"^{suggested}$", tc))
    print(f"  {tc}: {match_result}")

print("\nMatch vs Fullmatch difference:")
pat = re.compile(r"^v\d+(?:new)?\.[a-zA-Z]+\.nadabot\.near$")
test = "v1new.alpha.nadabot.near_extra"
print(f"  match('{test}'): {bool(pat.match(test))}")
print(f"  fullmatch('{test}'): {bool(pat.fullmatch(test))}")
PY

Length of output: 540
Use logging instead of print; fix restrictive charset; precompile pattern
Pattern verification confirms the original regex [a-zA-Z]+ rejects valid NEAR account names containing digits, hyphens, and underscores (e.g., v1new.alpha-1.nadabot.near). The current print() call should be replaced with proper logging. These changes address both functional correctness and code quality.
import re
+import logging
from django.conf import settings
+logger = logging.getLogger(__name__)
BASE_PATTERN = (
r"nadabot\.testnet"
if settings.ENVIRONMENT == "testnet"
-else r"v\d+(?:new)?\.[a-zA-Z]+\.nadabot\.near"
+ else r"v\d+(?:new)?\.[a-z0-9_.-]+\.nadabot\.near"
)
+_COMPILED = re.compile(rf"^{BASE_PATTERN}$")
+
def match_nadabot_registry_pattern(receiver):
"""Matches nadabot subaccounts for registry."""
- pattern = f"^{BASE_PATTERN}$"
- print(F"the TLA {pattern}, {receiver}")
- return bool(re.match(pattern, receiver))
+ logger.debug("nadabot pattern=%s receiver=%s", _COMPILED.pattern, receiver)
+    return bool(_COMPILED.fullmatch(receiver))

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In nadabot/utils.py around lines 5 to 9, the regex and print usage are
problematic: the character class [a-zA-Z]+ is too restrictive (rejects digits,
hyphens, underscores) and using print is inappropriate; replace the charset with
one that allows lowercase letters, digits, hyphens and underscores (e.g.,
[a-z0-9_-]+) and make it case-insensitive if needed, precompile the final
pattern into a module-level compiled regex (re.compile(..., re.IGNORECASE) as
appropriate), and remove the print() call in favor of logging via the module
logger (logger.debug/info/error) so the pattern is validated/used with proper
logging instead of printing.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Bug Fixes