
## DM-39183

Fix unit tests and GHA (checksanity step)

Use memray to profile memory usage for transaction pods
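A possible invocation, as a sketch (memray's run and flamegraph subcommands are real; the script name and output path are placeholders):

```sh
# Capture allocations from the pod's Python entry point (script name is hypothetical)
memray run -o /tmp/transaction.bin ingest_transaction.py
# Render the capture as a flame graph for inspection
memray flamegraph /tmp/transaction.bin
```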

Stop the ingest with an error if the chunk queue is not empty or if there are running transactions for the database, and document the cleanup procedure
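A minimal sketch of such a pre-flight check, assuming hypothetical queue and repl_client helpers (none of these names come from the codebase):

```python
# Hypothetical pre-flight check, run before (re)starting an ingest.
def assert_clean_state(queue, repl_client, database: str) -> None:
    if not queue.is_empty():
        raise RuntimeError("Chunk queue is not empty: clean it up before re-ingesting")
    started = repl_client.transactions(database, state="STARTED")
    if started:
        raise RuntimeError(f"Database {database} still has {len(started)} running transaction(s)")
```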

Document ingest.yaml parameters

```yaml
metadata_url:
ingest:
  async_proc_limit: 4
  low_speed_limit: 10
  low_speed_time: 3600
```

## DM-36606

Add ORDER BY to the case03 itest queries
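For instance, a hedged sketch of the kind of change (table and column names are illustrative, not taken from case03):

```sql
-- Before: row order is non-deterministic across runs
SELECT objectId, ra, decl FROM Object;
-- After: deterministic ordering makes the itest comparison stable
SELECT objectId, ra, decl FROM Object ORDER BY objectId;
```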

Document ingest.yaml + version change

Document repcli

Document qserv doc generation

Fritz will do that (decided on Jan 4th):

```sh
qserv build-docs --cmake --linkcheck --user=qserv --user-build-image qserv/lite-build-runner:2022.12.1-rc2
firefox build/doc/html/index.html
```

Document metadata.json version change

Add a version field, and also in all JSON files

Add charset_name, defaulting to latin1
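As a sketch, the metadata.json header might then look like this (the layout and the version value are assumptions; only version and charset_name are named above):

```json
{
  "version": 1,
  "charset_name": "latin1",
  "database": "..."
}
```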

Move itest/README.md to the official documentation and link to it

Move IngestConfig to a dedicated config.py file

Create version.py to check metadata.json and ingest.yaml version
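A minimal sketch of what version.py could check, assuming both files carry a top-level version field and a hand-maintained set of supported versions:

```python
"""Hypothetical version.py: validate the version of metadata.json and ingest.yaml."""
import json

import yaml

SUPPORTED_VERSIONS = {1}  # assumption: the versions this workflow can handle


def check_version(data: dict, source: str) -> None:
    version = data.get("version")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"{source}: unsupported version {version!r}")


def check_files(metadata_path: str, ingest_path: str) -> None:
    with open(metadata_path) as f:
        check_version(json.load(f), metadata_path)
    with open(ingest_path) as f:
        check_version(yaml.safe_load(f), ingest_path)
```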

Version ingest.yaml (add a version field there too)

See option to generate statistics with Gabrielle

https://confluence.lsstcorp.org/display/DM/Ingest%3A+4.+The+version+history+of+the+Ingest+API

Chase the non-deterministic benchmark bug

Add count(*) to the queries for all tables
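A hedged illustration (table names are assumptions):

```sql
-- Row-count sanity checks, one per table
SELECT COUNT(*) FROM Object;
SELECT COUNT(*) FROM Source;
SELECT COUNT(*) FROM ForcedSource;
```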

## Misc

Get chunk location once!
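One way to do this, sketched with a hypothetical lookup function: cache the answer so each chunk's location is fetched at most once.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def chunk_location(chunk_id: int) -> str:
    """Return a chunk's worker location, fetched at most once per run."""
    return lookup_via_replication_service(chunk_id)


def lookup_via_replication_service(chunk_id: int) -> str:
    # Placeholder for the real REST call to the replication service (hypothetical).
    raise NotImplementedError
```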

## kind/CI

## Argo/Config

See if example/config.yaml can be set via the argo CLI (i.e. in params.yaml): https://argoproj.github.io/argo-workflows/examples/#parameters

```sh
argo submit arguments-parameters.yaml --parameter-file params.yaml
```
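A params.yaml along the lines of the linked upstream example (the parameter name must match whatever the workflow declares):

```yaml
# params.yaml: values for the workflow's declared parameters
message: goodbye world
```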

In validate, SQLAlchemy has a weird behavior:

UPDATE: this seems to be fixed with the SQLAlchemy v12 upgrade

```
Query: SELECT count(*) AS count_1 FROM dpdd_ref
Query total time: 0.069575
Query result: 0
```

The result with a regular mysql client is 37??
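A hedged reproduction of the query as validate might issue it through SQLAlchemy (the DSN is a placeholder):

```python
from sqlalchemy import create_engine, func, select, table

# Placeholder DSN: point it at the database that validate checks.
engine = create_engine("mysql+mysqldb://user:password@host/db")

with engine.connect() as conn:
    stmt = select(func.count()).select_from(table("dpdd_ref"))
    print(conn.execute(stmt).scalar())  # compare against the mysql CLI result
```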

## two modes:

  • crash on error, as before
  • continue as far as possible: cancel ingest for chunks that produce some special error or that have been retried too many times without success (see the sketch after this list).
  • Improve error recovery: if a transaction fails, check the chunk queue state (non-terminated tasks) before relaunching the workflow, and ask for manual chunk-queue cleanup.
  • Improve management of connection parameters for input data.
  • Improve the argo-workflow install.
  • Run as non-root.
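A minimal sketch of the two modes; every name here is an assumption, not existing code:

```python
MAX_ATTEMPTS = 3  # assumed retry budget per chunk


class IngestError(Exception):
    def __init__(self, msg: str, fatal: bool = False):
        super().__init__(msg)
        self.fatal = fatal


def handle_chunk(chunk, mode: str) -> None:
    try:
        run_ingest(chunk)  # hypothetical ingest call
    except IngestError as exc:
        if mode == "crash-on-error":
            raise  # fail the whole workflow, as before
        # "continue" mode: give up on this chunk only when it looks hopeless
        if exc.fatal or chunk.attempts >= MAX_ATTEMPTS:
            cancel_chunk(chunk)  # hypothetical: mark the chunk as cancelled
        else:
            requeue(chunk)  # hypothetical: retry later
```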

## batch ingest