Tags: tensorajack/bitmind-subnet

v2.2.7

Release 2.2.7 (BitMind-AI#178)

* Validator Proxy Response Update (BitMind-AI#103)

* adding rich arg, adding coldkeys and hotkeys

* moving rich to payload from headers

* bump version

---------

Co-authored-by: benliang99 <caliangben@gmail.com>

* Two new image models: SDXL finetuned on Midjourney, and SD finetuned on anime images

* Added required StableDiffusionPipeline import

* Updated transformers version to fix tokenizer initialization error

* GPU Specification (BitMind-AI#108)

* Made gpu id specification consistent across synthetic image generation models

* Changed gpu_id to device

* Docstring grammar

* add neuron.device to SyntheticImageGenerator init

* Fixed variable names

* adding device to start_validator.sh

* deprecating old/biased random prompt generation

* properly clear gpu of moderation pipeline

* simplifying usage of self.device

* fixing moderation pipeline device

* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management
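
A minimal sketch of this pattern, assuming a Hugging Face text-generation pipeline; the model id and device choice are illustrative, not the repo's actual values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model and tokenizer explicitly so accelerate's auto device
# mapping never engages; everything lands on the single device chosen here.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_id = "example-org/moderation-llm"  # hypothetical model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)

moderation = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, device=device
)
```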

* deprecating random prompt generation

---------

Co-authored-by: benliang99 <caliangben@gmail.com>

* Update __init__.py

bump version

* removing logging

* old logging removed

* adding check for state file in case it is deleted somehow

* removing remaining random prompt generation code

* [Testnet] Video Challenges V1 (BitMind-AI#111)

* simple video challenge implementation wip

* dummy multimodal miner

* constants reorg

* updating verify_models script with t2v

* fixing MODEL_PIPELINE init

* cleanup

* __init__.py

* hasattr fix

* num_frames must be divisible by 8
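
The t2v pipelines in use apparently reject frame counts that aren't multiples of 8, so requests have to be snapped to a valid count first; an illustrative helper (not from the repo):

```python
def snap_num_frames(requested: int, multiple: int = 8) -> int:
    """Round a requested frame count down to the nearest valid multiple."""
    return max(multiple, (requested // multiple) * multiple)

assert snap_num_frames(30) == 24  # 30 is invalid; 24 is the multiple below it
assert snap_num_frames(64) == 64  # already valid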

* fixing dict iteration

* dummy response for videos

* fixing small bugs

* fixing video logging and compression

* apply image transforms uniformly to frames of video
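
The point of applying transforms uniformly is temporal consistency: augmentation parameters are sampled once per clip, not once per frame. A minimal sketch with a torchvision flip (the 50% probability is illustrative):

```python
import random
import torchvision.transforms.functional as TF

def flip_clip(frames):
    """Apply one randomly-sampled horizontal flip to every frame of a clip.

    Sampling the parameter once (rather than per frame) keeps the
    augmentation consistent across the whole video.
    """
    do_flip = random.random() < 0.5  # sampled once per clip
    return [TF.hflip(f) if do_flip else f for f in frames]
```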

* transform list of tensor to pil for synapse prep

* cleaning up vali forward

* miner function signatures to use Synapse base class instead of ImageSynapse

* vali requirements imageio and moviepy

* attaching separate video and image forward functions

* separating blacklist and priority fns for image/video synapses

* pred -> prediction

* initial synth video challenge flow

* initial video cache implementation

* video cache cleanup

* video zip downloads

* wip fairly large refactor of data generation, functionality and form

* generalized hf zip download fn

* had claude improve video_cache formatting

* vali forward cleanup

* cleanup + turning back on randomness for real/fake

* fix relative import

* wip moving video datasets to vali config

* Adding optimization flags to vali config

* check if captioning model already loaded

* async SyntheticDataGenerator wip

* async zip download

* ImageCache wip

* proper gpu clearing for moderation pipeline

* sdg cleanup

* new cache system WIP

* image/video cache updates

* cleaning up unused metadata arg, improving logging

* fixed frame sampling, parquet image extraction, image sampling

* synth data cache wip

* Moving sdg to its own pm2 process

* synthetic data gen memory management update

* mochi-1-preview

* util cleanup, new requirements

* ensure SyntheticDataGenerator process waits for ImageCache to populate

* adding new t2i models from main

* Fixing t2v model output saving

* miner cleanup

* Moving TALL model weights to bitmind hf org

* removing test video pkl

* fixing circular import

* updating usage of hf_hub_download according to some breaking huggingface_hub changes
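
The commit doesn't name the exact breaking change, but newer huggingface_hub releases expect keyword-style hf_hub_download calls along these lines; the repo id and filename below are hypothetical:

```python
from huggingface_hub import hf_hub_download

# Illustrative only: fetch one zip from a dataset repo using the
# keyword-style call newer huggingface_hub versions expect.
local_path = hf_hub_download(
    repo_id="bitmind/example-video-dataset",  # hypothetical repo
    filename="videos/part-000.zip",           # hypothetical file
    repo_type="dataset",
)
```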

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpeg

* video compatible data augmentations

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation with different dtypes
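
One plausible shape for such a context manager, assuming torch autocast underneath; the repo's actual implementation may differ:

```python
from contextlib import contextmanager
import torch

@contextmanager
def generation_dtype(dtype: torch.dtype, device_type: str = "cuda"):
    """Run generation under a model-specific dtype via torch autocast."""
    if dtype in (torch.float16, torch.bfloat16):
        with torch.autocast(device_type=device_type, dtype=dtype):
            yield
    else:
        yield  # full precision: no autocast needed

# usage: with generation_dtype(torch.bfloat16): frames = pipe(prompt)
```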

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & added w&b logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of train script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name, ty flake8

* fixing test fixtures

* disabling pytests for now; some are broken after the refactor and it's 4am

* fixing image_size for augmentations

* Updated validator gpu requirements (BitMind-AI#113)

* splitting rewards over image and video (BitMind-AI#112)

* Update README.md (BitMind-AI#110)

* combining requirements files

* Combined requirements installation

* Improved formatting, added checks to prevent overwriting existing .env files.

* Re-added endpoint options

* Fixed incorrect diffusers install

* Fixed missing initialization of miner performance trackers

* [Testnet] Docs Updates (BitMind-AI#114)

* docs updates

* mining docs update

* Removed deprecated requirements files from github tests (BitMind-AI#118)

* [Testnet] Async Cache Updates (BitMind-AI#119)

* breaking out cache updates into their own process

* adding retries for loading vali info

* moving device config to data generation process

* typo

* removing old run_updater init arg, fixing dataset indexing

* only download 1 zip to start to provide data for vali on first boot

* cache deletion functionality

* log cache size

* name images with dataset prefix

* Increased minimum and recommended storage (BitMind-AI#120)

* [Testnet] Data download cleanup (BitMind-AI#121)

* moving download_data.py to base_miner/datasets

* removing unused args in download_data

* constants -> config

* docs updates for new paths

* updating outdated fn headers

* pep8

* use png codec, sample by framerate + num frames

* fps, min_fps, max_fps parameterization of sample

* return fps and num frames
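
Together, the three changes above amount to sampling frames at a randomized target framerate and reporting what was actually used. A rough sketch under those assumptions (names are illustrative):

```python
import random

def sample_frames(frames, src_fps, num_frames, min_fps=1.0, max_fps=30.0, fps=None):
    """Subsample a decoded clip at a target framerate.

    Returns the sampled frames plus the (fps, num_frames) actually used,
    mirroring the "return fps and num frames" change above.
    """
    fps = fps or random.uniform(min_fps, max_fps)
    fps = min(fps, src_fps)                 # can't sample faster than source
    step = max(1, round(src_fps / fps))
    sampled = frames[::step][:num_frames]
    return sampled, fps, len(sampled)
```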

* Fix registry module imports (BitMind-AI#123)

* Fix registry module imports

* Fixing config loading issues

* fixing frame sampling

* bugfix

* print label on testnet

* reenabling model verification

* update detector class names

* Fixing config_name arg for camo

* fixing detector config in camo

* fixing ref to self.config_name

* update default frame rate

* video dataset creation example

* default config for video datasets

* update default num_videos

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* Update README.md

* README title

* removing samples from cache

* README

* fixing cache removal (BitMind-AI#125)

* Fixed tensor not being set to device for video challenges, causing errors when using cuda (BitMind-AI#126)

* Mainnet Prep (BitMind-AI#127)

* resetting challenge timer to 60s

* fix logging for miner history loading

* randomize model order, log gen time

* remove frame limit

* separate logging to after data check

* generate with batch=1 first for diverse data availability

* load v1 history path for smooth transition to new incentive

* prune extracted cache

* swapping url open-images for jpg

* removing unused config args

* shortening cache refresh timer

* cache optimizations

* typo

* better variable naming

* default to autocast

* log num files in cache along with GB

* surfacing max size gb variables

* cooked typo

* Fixed wrong validation split key string causing no transform to be applied

* Changed detector arg to be required

* fixing hotkey reset check

* removing logline

* clamp mcc at 0 so video doesn't negatively impact performant image miners
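
A sketch of the clamp, assuming scikit-learn's MCC: since MCC lives in [-1, 1], flooring at 0 means a weak video record can fail to add reward but can never subtract from it.

```python
from sklearn.metrics import matthews_corrcoef

def video_reward_component(y_true, y_pred):
    """MCC clamped at 0 so a negative video score can't drag down
    an otherwise performant image miner's reward."""
    return max(0.0, matthews_corrcoef(y_true, y_pred))
```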

* typo

* improving cache logs

* prune after clear

* only update relevant tracker in reward

* improved logging, turned off cache removal in sample()

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* removing old reqs from autoupdate

* Re-added bitmind HF org prefix to dataset path

* shortening self heal timer

* autoupdate

* autoupdate

* sample size

* Validator Improvements: VRAM usage, logging (BitMind-AI#131)

* ensure vali process and cache update process do not consume any vram

* skip challenge if unable to create wandb Image/Video object (indicating corrupt file)

* manually set log level to info

* removing debug print

* enable_info in config

* cleanup

* version bump

* moved info log setting to config.py

* Bittensor 8.5.1 (BitMind-AI#133)

* bittensor 8.5.1

* bump package version

* Prompt Generation Pipeline Improvements (BitMind-AI#135)

* Release 2.0.3 (BitMind-AI#134)

Bittensor 8.5.1

* enhancing prompts by adding conveyed motion with llama

* Mining docs fix setup_miner_env.sh -> setup_env.sh

* [testnet] I2i/in painting (BitMind-AI#137)

* Initial i2i constants for in-painting

* Initial in painting functionality with mask (oval/rectangle) and annotation generation

* Refactor ipg to match sdg format, added caching and support for selecting from multiple in-painting models

* Fixed cache import, updated test script

* Separate cache for i2i when using run_data_generator

* Renamed synth cache constants, added support for multiple validator synth caches, and selection between i2i (20%) and t2i (80%) in forward
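
The 20/80 split presumably reduces to a weighted draw in the validator forward; a one-line illustrative version:

```python
import random

# 20% in-painting (i2i) challenges, 80% text-to-image (t2i)
task = random.choices(["i2i", "t2i"], weights=[0.2, 0.8], k=1)[0]
```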

* Unifying InPaintingGenerator and SyntheticDataGenerator (BitMind-AI#136)

* WIP, unifying InpaintingGenerator and SyntheticDataGenerator

* minor simplification of forward flow

* simplifying forward flow

* standardizing cache structures with the introduction of task type subdirs

* adding i2i models to batch generation

* removing deprecated InPaintingGenerator from run script

* adding --clear-cache option for validator

* updating SDG init params

* fixing last imports + directory structure references

* fixing images passed to generate function for i2i

* option to log masks/original images for i2i challenges

* fixing help hint for output-dir

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* Updated image_annotation_generator to prompt_generator (BitMind-AI#138)

* bump version 2.0.3 -> 2.1.0

* testing cache clearing via autoupdate

* cranking up video rewards to .2

* Add DeepFloyd/IF model and multi-stage pipeline support

Added DeepFloyd/IF-I-XL + IF-II-L model configuration, pipeline_stages configuration for multi-stage models

* Moved multistage pipeline generator to SyntheticDataGenerator

* Args for testing specific model

* [TESTNET] HunyuanVideo (BitMind-AI#140)

* hunyuan video initial commit

* delete resolution from from_pretrained_args after extracting h,w

* model_id arg for from_pretrained

* standardizing model_id usage

* fixing autocast and torch_dtype for hunyuan

* adding resolution options and save options for all t2v models

* missing comma in config

* Update __init__.py

* updated subnet arch diagram

* README wip

* docs updates

* README updates

* README updates

* more README updates

* README updates

* README updates

* README cleanup

* more README updates

* Fixing table border removal html for github

* fixing table html

* one last attempt at a prettier table

* one last last attempt at a prettier table

* bumping video rewards

* removing decay for unsampled miners

* README cleanup

* increasing suggested and min compute for validators

* README update, markdown fix in Incentive.md

* README tweak

* removing redundant dereg check from update_scores

* DeepFloyd-specific configs, args for better cache/data gen testing, multistage pipeline i/o

* use largest DeepFloyd-IF I and II models, ensure no watermarker

* Fixed FLUX resolution format, added back model_id and scheduler loading for video models

* Add Janus-Pro-7B t2i model with custom diffuser pipeline class

* Janus repo install

* Removed custom wrapper files, added Janus DiffusionPipeline wrapper to model_utils, cleaned up configs

* Removed DiffusionPipeline import

* Uncomment wandb inits

* Move create_pipeline_generator() to model utils

* Moved model optimizations to model utils

* [Testnet] Multi-Video Challenges (BitMind-AI#148)

* Implementation of frame stitching for 2 videos
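
Frame stitching here means splicing two clips at a cut point so the challenge video contains a transition; a minimal sketch with a random cut (function and frame counts are illustrative):

```python
import random

def stitch_clips(clip_a, clip_b, total_frames=16):
    """Splice two clips at a random cut point into one challenge video.

    Illustrative sketch: frames from clip_a up to the cut, then clip_b.
    """
    cut = random.randint(1, total_frames - 1)
    return clip_a[:cut] + clip_b[: total_frames - cut]
```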

* ComposeWithParams fix

* vflip + hflip fix

* wandb video logging fix courtesy of eric

* proper arg passing for prompt moderation

* version bump

* i2i crop guardrails

* Update config.py

Removing problematic resolution for CogVideoX5b

* explicit requirements install

* moving pm2 process stopping prior to model verification

* fix for no available videos in multi-video challenge generation

* Update forward.py

Multi-video threshold 0.2

* [Testnet] Multiclass Rewards (BitMind-AI#150)

* multiclass protocols

* multiclass rewards

* facilitating smooth transition from old protocol to multiclass

* DTAO: Bittensor SDK 9.0.0 (BitMind-AI#152)

* Update requirements.txt

* version bump

* moving prediction backwards compatibility to synapse.deserialize

* mcc-based reward with rational transform
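
The exact rational transform isn't spelled out in this log; the following is a common rational squashing of a clamped MCC, shown purely as an assumed illustration:

```python
def rational_transform(mcc: float, k: float = 0.5) -> float:
    """Assumed form of a rational reward transform: maps a clamped MCC in
    [0, 1] through a concave curve that rewards early gains more steeply.
    The constant k is illustrative, not taken from the repo."""
    mcc = max(0.0, mcc)
    return (1.0 + k) * mcc / (mcc + k)  # 0 -> 0, 1 -> 1, concave in between
```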

* cast predictions to np array upon loading miner history

* version bump

* [Testnet] video organics (BitMind-AI#151)

* improved vali proxy with video endpoint

* renaming endpoints

* Fixing vali proxy initialization

* make vali proxy async again

* handling testnet situation of low miner activity

* BytesIO import

* upgrading transformers

* switching to multipart form data
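
The client side of a multipart upload to the proxy's video endpoint might look like this; the URL, endpoint path, and field name are hypothetical:

```python
import requests

# Hypothetical endpoint and field name: send a video as multipart form
# data instead of a raw or base64-encoded payload.
with open("challenge.mp4", "rb") as f:
    resp = requests.post(
        "http://validator-proxy:8080/predict_video",
        files={"video": ("challenge.mp4", f, "video/mp4")},
    )
print(resp.json())
```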

* Validator Proxy handling of Multiclass Responses (BitMind-AI#153)

* update vali proxy to return floats instead of vectors

* removing rational transform for now

* new incentive docs (BitMind-AI#154)

* python-multipart

* Semisynthetic Cache (BitMind-AI#158)

* new cache structure and related config vars

* refactored vali forward to be more modular

* cleanup

* restructure wip

* added dataset to cache dir hierarchy, cleaned up data classes, better error reporting for missing frames

* fixing cache access order

* bugfixes for semisynthetic image cache and safer pruning

* config and logging cleanup

* cache clear for this release

---------

Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

* version bump

* Changing multiclass reward weight to 0.25

* uncommenting dlib

* bittensor==9.0.3

* Handling datasets with few files that don't need regular local updates

* fixing error logging variable names

* dreamshaper-8-inpainting (BitMind-AI#161)

* Vali/sd v15 inpainting (BitMind-AI#162)

* inpainting pipeline import

* removing cache clear from autoupdate

* eidon data

* version bump

* removing filetype key from bm-eidon-image

* refresh cache

* [Testnet] Broken Pipes Fix (BitMind-AI#166)

* improved dendrite class with proper connection pool management to deal with these pesky broken pipes
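
Broken pipes from ad-hoc connections are typically cured by reusing one long-lived session over a bounded pool. A sketch with aiohttp, which bittensor's dendrite uses under the hood; the limits are illustrative, not the repo's settings:

```python
import aiohttp

async def make_session() -> aiohttp.ClientSession:
    """One long-lived session with a bounded, keep-alive-aware pool."""
    connector = aiohttp.TCPConnector(
        limit=100,                    # total simultaneous connections
        limit_per_host=10,            # per-miner cap
        enable_cleanup_closed=True,   # reap half-closed sockets (broken pipes)
    )
    return aiohttp.ClientSession(connector=connector)
```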

* logging and indentation

* version bump

* updating connection pool config

* removing cache clear

* reward transform + logging updates

* Added LORA model support

* Added JourneyDB static synthetic dataset

* Added GenImage Midjourney synthetic image dataset

* Fixed dataset name

* Added import for SYNTH_IMAGE_CACHE_DIR

* Typo

* merging scores fix to testnet

* fix for augmented video logging (BitMind-AI#177)

* wandb cleanup

* logging update, testing autoupdate

* making wandb cache clears periodic regardless of autoupdate/self heal

* move wandb cache cleaning to its own dir

* typo

* bump version

---------

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>
Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.6

Release 2.2.6 (BitMind-AI#172)

Co-authored-by: Dylan Uys <dylan.uys@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>
Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.4

Release 2.2.4 (BitMind-AI#167)

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>
Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.3

Release 2.2.3 (BitMind-AI#164)

* Validator Proxy Response Update (BitMind-AI#103)

* adding rich arg, adding coldkeys and hotokeys

* moving rich to payload from headers

* bump version

---------

Co-authored-by: benliang99 <caliangben@gmail.com>

* Two new image models: SDXL finetuned on Midjourney, and SD finetuned on anime images

* Added required StableDiffusionPipeline import

* Updated transformers version to fix tokenizer initialization error

* GPU Specification (BitMind-AI#108)

* Made gpu id specification consistent across synthetic image generation models

* Changed gpu_id to device

* Docstring grammar

* add neuron.device to SyntheticImageGenerator init

* Fixed variable names

* adding device to start_validator.sh

* deprecating old/biased random prompt generation

* properly clear gpu of moderation pipeline

* simplifying usage of self.device

* fixing moderation pipeline device

* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management

* deprecating random prompt generation

---------

Co-authored-by: benliang99 <caliangben@gmail.com>

* Update __init__.py

bump version

* removing logging

* old logging removed

* adding check for state file in case it is deleted somehow

* removing remaining random prompt generation code

* [Testnet] Video Challenges V1 (BitMind-AI#111)

* simple video challenge implementation wip

* dummy multimodal miner

* constants reorg

* updating verify_models script with t2v

* fixing MODEL_PIPELINE init

* cleanup

* __init__.py

* hasattr fix

* num_frames must be divisible by 8

* fixing dict iteration

* dummy response for videos

* fixing small bugs

* fixing video logging and compression

* apply image transforms uniformly to frames of video

* transform list of tensor to pil for synapse prep

* cleaning up vali forward

* miner function signatures to use Synapse base class instead of ImageSynapse

* vali requirements imageio and moviepy

* attaching separate video and image forward functions

* separating blacklist and priority fns for image/video synapses

* pred -> prediction

* initial synth video challenge flow

* initial video cache implementation

* video cache cleanup

* video zip downloads

* wip fairly large refactor of data generation, functionality and form

* generalized hf zip download fn

* had claude improve video_cache formatting

* vali forward cleanup

* cleanup + turning back on randomness for real/fake

* fix relative import

* wip moving video datasets to vali config

* Adding optimization flags to vali config

* check if captioning model already loaded

* async SyntheticDataGenerator wip

* async zip download

* ImageCache wip

* proper gpu clearing for moderation pipeline

* sdg cleanup

* new cache system WIP

* image/video cache updates

* cleaning up unused metadata arg, improving logging

* fixed frame sampling, parquet image extraction, image sampling

* synth data cache wip

* Moving sgd to its own pm2 process

* synthetic data gen memory management update

* mochi-1-preview

* util cleanup, new requirements

* ensure SyntheticDataGenerator process waits for ImageCache to populate

* adding new t2i models from main

* Fixing t2v model output saving

* miner cleanup

* Moving tall model weights to bitmind hf org

* removing test video pkl

* fixing circular import

* updating usage of hf_hub_download according to some breaking huggingface_hub changes

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpegg

* video compatible data augmentations

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation w different dtypes

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & added w&B logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of trian script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name ty flake8

* fixing test fixtures

* disabling pytests for now, some are broken after refactor and it's 4am

* fixing image_size for augmentations

* Updated validator gpu requirements (BitMind-AI#113)

* splitting rewards over image and video (BitMind-AI#112)

* Update README.md (BitMind-AI#110)

* combining requirements files

* Combined requirements installation

* Improved formatting, added checks to prevent overwriting existing .env files.

* Re-added endpoint options

* Fixed incorrect diffusers install

* Fixed missing initialization of miner performance trackers

* [Testnet] Docs Updates (BitMind-AI#114)

* docs updates

* mining docs update

* Removed deprecated requirements files from github tests (BitMind-AI#118)

* [Testnet] Async Cache Updates (BitMind-AI#119)

* breaking out cache updates into their own process

* adding retries for loading vali info

* moving device config to data generation process

* typo

* removing old run_updater init arg, fixing dataset indexing

* only download 1 zip to start to provide data for vali on first boot

* cache deletion functionality

* log cache size

* name images with dataset prefix

* Increased minimum and recommended storage (BitMind-AI#120)

* [Testnet] Data download cleanup (BitMind-AI#121)

* moving download_data.py to base_miner/datasets

* removing unused args in download_data

* constants -> config

* docs updates for new paths

* updating outdated fn headers

* pep8

* use png codec, sample by framerate + num frames

* fps, min_fps, max_fps parameterization of sample
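
  A sketch of what fps-bounded sampling can look like (names and logic are illustrative, not the repo's exact implementation): pick an effective fps within [min_fps, max_fps], derive a frame stride from the source fps, then take num_frames frames from a random start.

  ```python
  # Illustrative only: sample frame indices at an effective fps bounded by
  # [min_fps, max_fps]; all names are assumptions, not the subnet's code.
  import random
  from typing import List

  def sample_frame_indices(total_frames: int, source_fps: float,
                           num_frames: int, min_fps: float,
                           max_fps: float) -> List[int]:
      fps = random.uniform(min_fps, min(max_fps, source_fps))
      stride = max(1, round(source_fps / fps))
      span = stride * (num_frames - 1) + 1
      start = random.randint(0, max(0, total_frames - span))
      return [min(start + i * stride, total_frames - 1)
              for i in range(num_frames)]
  ```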

* return fps and num frames

* Fix registry module imports (BitMind-AI#123)

* Fix registry module imports

* Fixing config loading issues

* fixing frame sampling

* bugfix

* print label on testnet

* reenabling model verification

* update detector class names

* Fixing config_name arg for camo

* fixing detector config in camo

* fixing ref to self.config_name

* update default frame rate

* video dataset creation example

* default config for video datasets

* update default num_videos

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* Update README.md

* README title

* removing samples from cache

* README

* fixing cache removal (BitMind-AI#125)

* Fixed tensor not being set to device for video challenges, causing errors when using cuda (BitMind-AI#126)

* Mainnet Prep (BitMind-AI#127)

* resetting challenge timer to 60s

* fix logging for miner history loading

* randomize model order, log gen time

* remove frame limit

* separate logging to after data check

* generate with batch=1 first for diverse data availability

* load v1 history path for smooth transition to new incentive

* prune extracted cache

* swapping url open-images for jpg

* removing unused config args

* shortening cache refresh timer

* cache optimizations

* typo

* better variable naming

* default to autocast

* log num files in cache along with GB

* surfacing max size gb variables

* cooked typo

* Fixed wrong validation split key string causing no transform to be applied

* Changed detector arg to be required

* fixing hotkey reset check

* removing logline

* clamp mcc at 0 so video doesn't negatively impact performant image miners
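
  The idea, sketched below with sklearn's MCC (the surrounding reward wiring is assumed): MCC lies in [-1, 1], so flooring the video score at zero keeps a weak video detector from subtracting from an otherwise strong image score.

  ```python
  # Sketch of the clamp; only the max(0, mcc) step comes from the log,
  # everything else is illustrative.
  from sklearn.metrics import matthews_corrcoef

  def clamped_video_mcc(y_true, y_pred) -> float:
      return max(0.0, matthews_corrcoef(y_true, y_pred))
  ```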

* typo

* improving cache logs

* prune after clear

* only update relevant tracker in reward

* improved logging, turned off cache removal in sample()

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* removing old reqs from autoupdate

* Re-added bitmind HF org prefix to dataset path

* shortening self heal timer

* autoupdate

* autoupdate

* sample size

* Validator Improvements: VRAM usage, logging (BitMind-AI#131)

* ensure vali process and cache update process do not consume any vram

* skip challenge if unable to create wandb Image/Video object (indicating corrupt file)
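
  A minimal sketch of that guard (function name assumed): if wandb can't construct a media object from the cached file, treat the file as corrupt and skip the challenge.

  ```python
  # Hedged sketch: returning None signals the caller to skip the challenge.
  import wandb

  def try_wandb_media(path: str, is_video: bool):
      try:
          return wandb.Video(path) if is_video else wandb.Image(path)
      except Exception:
          return None  # corrupt or unreadable file
  ```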

* manually set log level to info

* removing debug print

* enable_info in config

* cleanup

* version bump

* moved info log setting to config.py

* Bittensor 8.5.1 (BitMind-AI#133)

* bittensor 8.5.1

* bump package version

* Prompt Generation Pipeline Improvements (BitMind-AI#135)

* Release 2.0.3 (BitMind-AI#134)

Bittensor 8.5.1

* enhancing prompts by adding conveyed motion with llama

* Mining docs fix setup_miner_env.sh -> setup_env.sh

* [Testnet] i2i/in-painting (BitMind-AI#137)

* Initial i2i constants for in-painting

* Initial in-painting functionality with mask (oval/rectangle) and annotation generation

* Refactor ipg to match sdg format, added caching and support for selecting from multiple in-painting models

* Fixed cache import, updated test script

* Separate cache for i2i when using run_data_generator

* Renamed synth cache constants, added support for multiple validator synth caches, and selection between i2i (20%) and t2i (80%) in forward
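
  The 20/80 split reads naturally as a weighted draw in the validator forward; a sketch under that assumption (the weights come from the commit message, the helper name is made up):

  ```python
  # Illustrative 80/20 task selection between text-to-image and
  # image-to-image challenges.
  import random

  def pick_synthetic_task() -> str:
      return random.choices(["t2i", "i2i"], weights=[0.8, 0.2], k=1)[0]
  ```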

* Unifying InPaintingGenerator and SyntheticDataGenerator (BitMind-AI#136)

* WIP, unifying InpaintingGenerator and SyntheticDataGenerator

* minor simplification of forward flow

* simplifying forward flow

* standardizing cache structures with the introduction of task type subdirs

* adding i2i models to batch generation

* removing deprecated InPaintingGenerator from run script

* adding --clear-cache option for validator

* updating SDG init params

* fixing last imports + directory structure references

* fixing images passed to generate function for i2i

* option to log masks/original images for i2i challenges

* fixing help hint for output-dir

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* Updated image_annotation_generator to prompt_generator (BitMind-AI#138)

* bump version 2.0.3 -> 2.1.0

* testing cache clearing via autoupdate

* cranking up video rewards to 0.2

* Add DeepFloyd/IF model and multi-stage pipeline support

Added DeepFloyd/IF-I-XL + IF-II-L model configuration, pipeline_stages configuration for multi-stage models

* Moved multistage pipeline generator to SyntheticDataGenerator

* Args for testing specific model

* [TESTNET] HunyuanVideo (BitMind-AI#140)

* hunyuan video initial commit

* delete resolution from from_pretrained_args after extracting h,w

* model_id arg for from_pretrained

* standardizing model_id usage

* fixing autocast and torch_dtype for hunyuan

* adding resolution options and save options for all t2v models

* missing comma in config

* Update __init__.py

* updated subnet arch diagram

* README wip

* docs updates

* README updates

* README updates

* more README updates

* README updates

* README updates

* README cleanup

* more README updates

* Fixing table border removal html for github

* fixing table html

* one last attempt at a prettier table

* one last last attempt at a prettier table

* bumping video rewards

* removing decay for unsampled miners

* README cleanup

* increasing suggested and min compute for validators

* README update, markdown fix in Incentive.md

* README tweak

* removing redundant dereg check from update_scores

* DeepFloyd-specific configs, args for better cache/data gen testing, multistage pipeline i/o

* use largest DeepFloyd-IF I and II models, ensure no watermarker

* Fixed FLUX resolution format, added back model_id and scheduler loading for video models

* Add Janus-Pro-7B t2i model with custom diffuser pipeline class

* Janus repo install

* Removed custom wrapper files, added Janus DiffusionPipeline wrapper to model_utils, cleaned up configs

* Removed DiffusionPipeline import

* Uncomment wandb inits

* Move create_pipeline_generator() to model utils

* Moved model optimizations to model utils

* [Testnet] Multi-Video Challenges (BitMind-AI#148)

* Implementation of frame stitching for 2 videos

* ComposeWithParams fix

* vflip + hflip fix

* wandb video logging fix courtesy of eric

* proper arg passing for prompt moderation

* version bump

* i2i crop guardrails

* Update config.py

Removing problematic resolution for CogVideoX5b

* explicit requirements install

* moving pm2 process stopping prior to model verification

* fix for no available videos in multi-video challenge generation

* Update forward.py

Multi-video threshold 0.2

* [Testnet] Multiclass Rewards (BitMind-AI#150)

* multiclass protocols

* multiclass rewards

* facilitating smooth transition from old protocol to multiclass

* DTAO: Bittensor SDK 9.0.0 (BitMind-AI#152)

* Update requirements.txt

* version bump

* moving prediction backwards compatibility to synapse.deserialize

* mcc-based reward with rational transform
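
  The log doesn't spell out the transform; one common "rational" squashing of a score x is x / (1 + |x|), shown purely as an assumed illustration of mapping MCC into a bounded reward.

  ```python
  # Assumed illustration only; the subnet's actual transform isn't shown here.
  def rational_transform(x: float) -> float:
      return x / (1.0 + abs(x))  # maps the real line smoothly into (-1, 1)
  ```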

* cast predictions to np array upon loading miner history

* version bump

* [Testnet] video organics (BitMind-AI#151)

* improved vali proxy with video endpoint

* renaming endpoints

* Fixing vali proxy initialization

* make vali proxy async again

* handling testnet situation of low miner activity

* BytesIO import

* upgrading transformers

* switching to multipart form data

* Validator Proxy handling of Multiclass Responses (BitMind-AI#153)

* update vali proxy to return floats instead of vectors

* removing rational transform for now

* new incentive docs (BitMind-AI#154)

* python-multipart

* Semisynthetic Cache (BitMind-AI#158)

* new cache structure and related config vars

* refactored vali forward to be more modular

* cleanup

* restructure wip

* added dataset to cache dir hierarchy, cleaned up data classes, better error reporting for missing frames

* fixing cache access order

* bugfixes for semisynthetic image cache and safer pruning

* config and logging cleanup

* cache clear for this release

---------

Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

* version bump

* Changing multiclass reward weight to 0.25

* uncommenting dlib

* bittensor==9.0.3

* Handling datasets with few files that don't need regular local updates

* fixing error logging variable names

* dreamshaper-8-inpainting (BitMind-AI#161)

* Vali/sd v15 inpainting (BitMind-AI#162)

* inpainting pipeline import

* removing cache clear from autoupdate

* eidon data

* version bump

* removing filetype key from bm-eidon-image

* refresh cache

---------

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>
Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.2

Validator Cache Distribution (BitMind-AI#160)

* adding subdir for generated data, adding uniform distribution over cache data dirs
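
  Sampling uniformly over directories first (rather than over the pooled file list) keeps large datasets from dominating; a sketch under that reading, with hypothetical paths and names:

  ```python
  # Hedged sketch: uniform over subdirectories, then uniform within one, so a
  # big dataset can't crowd out smaller ones. Paths/names are hypothetical.
  import random
  from pathlib import Path

  def sample_cached_file(cache_root: str) -> Path:
      subdirs = [d for d in Path(cache_root).iterdir() if d.is_dir()]
      chosen = random.choice(subdirs)
      files = [f for f in chosen.iterdir() if f.is_file()]
      return random.choice(files)
  ```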

* create model output dir in sdg

* version bump

* cleaned up subnet diagram

* logging update

---------

Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.1

Release 2.2.1 (BitMind-AI#159)

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>
Co-authored-by: Dylan Uys <dylan.uys@gmai.com>

v2.2.0

Release 2.2.0 (BitMind-AI#155)

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>

v2.1.4

Release 2.1.4 (BitMind-AI#149)

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>

2.1.2

Score Adjustment (BitMind-AI#144)

* increasing ema alpha slightly, reintroducing a weaker decay to score

* version bump
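
The score adjustment above amounts to roughly the following; the constants and array names are illustrative, not the subnet's actual values:

```python
import numpy as np

ALPHA = 0.05  # assumed EMA smoothing factor ("increasing ema alpha slightly")
DECAY = 0.99  # assumed weak per-round decay ("reintroducing a weaker decay")

def update_scores(scores: np.ndarray, rewards: np.ndarray, sampled: np.ndarray) -> np.ndarray:
    """EMA update for miners sampled this round; gentle decay for the rest."""
    scores = scores.copy()
    scores[sampled] = ALPHA * rewards + (1 - ALPHA) * scores[sampled]
    rest = np.setdiff1d(np.arange(len(scores)), sampled)
    scores[rest] *= DECAY
    return scores
```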

v2.1.1

Release 2.1.1 (BitMind-AI#141)

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpeg

* video compatible data augmentations
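
Video-compatible augmentation hinges on drawing random params once per clip so every frame transforms identically; a ComposeWithParams-style sketch (internals assumed, not the repo's code):

```python
import random
import torchvision.transforms.functional as TF

class ComposeWithParams:
    """Apply the identical randomly-drawn transform to every frame of a clip."""

    def __init__(self, flip_p: float = 0.5, max_rotate: float = 10.0):
        self.flip_p = flip_p
        self.max_rotate = max_rotate

    def __call__(self, frames):
        do_flip = random.random() < self.flip_p  # drawn once per clip, not per frame
        angle = random.uniform(-self.max_rotate, self.max_rotate)
        out = []
        for frame in frames:
            if do_flip:
                frame = TF.hflip(frame)
            out.append(TF.rotate(frame, angle))
        return out
```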

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation w/ different dtypes
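
A minimal version of such a context manager, assuming the goal is autocast only for reduced-precision dtypes:

```python
from contextlib import nullcontext

import torch

def generation_ctx(device_type: str, dtype: torch.dtype | None):
    """Autocast for fp16/bf16 generation; full precision runs bare."""
    if dtype in (torch.float16, torch.bfloat16):
        return torch.autocast(device_type=device_type, dtype=dtype)
    return nullcontext()

# e.g. with generation_ctx("cuda", torch.bfloat16):
#          frames = pipeline(prompt=prompt).frames
```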

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache
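
A hypothetical outline of what BaseCache centralizes; the method names are assumptions based on the surrounding commits (sampling, pruning, size logging):

```python
import random
from abc import ABC, abstractmethod
from pathlib import Path

class BaseCache(ABC):
    """Shared on-disk cache behavior for ImageCache/VideoCache (sketch)."""

    def __init__(self, cache_dir: str, extensions: tuple):
        self.cache_dir = Path(cache_dir)
        self.extensions = extensions

    def _files(self) -> list:
        return [p for p in self.cache_dir.rglob("*") if p.suffix in self.extensions]

    def sample(self) -> Path:
        """Return one random cached file."""
        return random.choice(self._files())

    def prune(self, max_bytes: int) -> None:
        """Delete oldest files until the cache fits under max_bytes."""
        files = sorted(self._files(), key=lambda p: p.stat().st_mtime)
        while files and sum(p.stat().st_size for p in files) > max_bytes:
            files.pop(0).unlink()

    @abstractmethod
    def update(self) -> None:
        """Subclasses fetch fresh data into the cache."""
```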

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & adding w&b logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of train script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name ty flake8

* fixing test fixtures

* disabling pytests for now, some are broken after refactor and it's 4am

* fixing image_size for augmentations

* Updated validator gpu requirements (BitMind-AI#113)

* splitting rewards over image and video (BitMind-AI#112)
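
Conceptually the split is a weighted sum; the weights below are illustrative (the 0.2 figure appears in the video-reward commits), not pulled from the incentive config:

```python
IMAGE_WEIGHT = 0.8
VIDEO_WEIGHT = 0.2  # per the "video rewards to 0.2" commit above

def combined_reward(image_score: float, video_score: float) -> float:
    """Blend per-modality scores into one miner reward."""
    return IMAGE_WEIGHT * image_score + VIDEO_WEIGHT * video_score
```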

* Update README.md (BitMind-AI#110)

* combining requirements files

* Combined requirements installation

* Improved formatting, added checks to prevent overwriting existing .env files.

* Re-added endpoint options

* Fixed incorrect diffusers install

* Fixed missing initialization of miner performance trackers

* [Testnet] Docs Updates (BitMind-AI#114)

* docs updates

* mining docs update

* Removed deprecated requirements files from github tests (BitMind-AI#118)

* [Testnet] Async Cache Updates (BitMind-AI#119)

* breaking out cache updates into their own process

* adding retries for loading vali info

* moving device config to data generation process

* typo

* removing old run_updater init arg, fixing dataset indexing

* only download 1 zip to start to provide data for vali on first boot

* cache deletion functionality

* log cache size

* name images with dataset prefix

* Increased minimum and recommended storage (BitMind-AI#120)

* [Testnet] Data download cleanup (BitMind-AI#121)

* moving download_data.py to base_miner/datasets

* removing unused args in download_data

* constants -> config

* docs updates for new paths

* updating outdated fn headers

* pep8

* use png codec, sample by framerate + num frames

* fps, min_fps, max_fps parameterization of sample

* return fps and num frames
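
Sampling by framerate plus frame count, with fps bounds and the used fps/count returned, might look like this OpenCV sketch (the repo's ffmpeg-based utilities differ in detail):

```python
import random

import cv2

def sample_frames(path: str, num_frames: int, min_fps: float = 1.0, max_fps: float = 8.0):
    """Subsample a clip at a randomly chosen effective framerate."""
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or max_fps
    hi = max(min_fps, min(max_fps, src_fps))
    fps = random.uniform(min_fps, hi)
    step = max(1, round(src_fps / fps))
    frames, i = [], 0
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames, fps, len(frames)  # fps and count actually used
```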

* Fix registry module imports (BitMind-AI#123)

* Fix registry module imports

* Fixing config loading issues

* fixing frame sampling

* bugfix

* print label on testnet

* reenabling model verification

* update detector class names

* Fixing config_name arg for camo

* fixing detector config in camo

* fixing ref to self.config_name

* update default frame rate

* video dataset creation example

* default config for video datasets

* update default num_videos

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* Update README.md

* README title

* removing samples from cache

* README

* fixing cache removal (BitMind-AI#125)

* Fixed tensor not being set to device for video challenges, causing errors when using cuda (BitMind-AI#126)

* Mainnet Prep (BitMind-AI#127)

* resetting challenge timer to 60s

* fix logging for miner history loading

* randomize model order, log gen time

* remove frame limit

* moving logging to after the data check

* generate with batch=1 first for diverse data availability

* load v1 history path for smooth transition to new incentive

* prune extracted cache

* swapping url open-images for jpg

* removing unused config args

* shortening cache refresh timer

* cache optimizations

* typo

* better variable naming

* default to autocast

* log num files in cache along with GB

* surfacing max size gb variables

* cooked typo

* Fixed wrong validation split key string causing no transform to be applied

* Changed detector arg to be required

* fixing hotkey reset check

* removing logline

* clamp mcc at 0 so video doesn't negatively impact performant image miners
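
The clamp is a one-liner on top of scikit-learn's MCC; a sketch assuming binary labels:

```python
from sklearn.metrics import matthews_corrcoef

def clamped_video_mcc(y_true, y_pred) -> float:
    """Floor MCC at 0 so weak video performance can't drag down image scores."""
    return max(0.0, matthews_corrcoef(y_true, y_pred))
```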

* typo

* improving cache logs

* prune after clear

* only update relevant tracker in reward

* improved logging, turned off cache removal in sample()

---------

Co-authored-by: Andrew <caliangandrew@gmail.com>

* removing old reqs from autoupdate

* Re-added bitmind HF org prefix to dataset path

* shortening self-heal timer

* autoupdate

* autoupdate

* sample size

* Validator Improvements: VRAM usage, logging (BitMind-AI#131)

* ensure vali process and cache update process do not consume any vram

* skip challenge if unable to create wandb Image/Video object (indicating corrupt file)
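
Constructing the W&B media object first doubles as a corruption check; a sketch with an assumed helper name:

```python
import wandb

def to_wandb_media(path: str, is_video: bool):
    """Return a wandb.Video/wandb.Image, or None when the file fails to parse."""
    try:
        return wandb.Video(path) if is_video else wandb.Image(path)
    except Exception:
        return None  # corrupt file: caller skips this challenge
```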

* manually set log level to info

* removing debug print

* enable_info in config

* cleanup

* version bump

* moved info log setting to config.py

* Bittensor 8.5.1 (BitMind-AI#133)

* bittensor 8.5.1

* bump package version

* Prompt Generation Pipeline Improvements (BitMind-AI#135)

* Release 2.0.3 (BitMind-AI#134)

Bittensor 8.5.1

* enhancing prompts by adding conveyed motion with llama

* Mining docs fix setup_miner_env.sh -> setup_env.sh

* [testnet] I2i/in painting (BitMind-AI#137)

* Initial i2i constants for in-painting

* Initial in-painting functionality with mask (oval/rectangle) and annotation generation

* Refactor ipg to match sdg format, added caching and support for selecting from multiple in-painting models

* Fixed cache import, updated test script

* Separate cache for i2i when using run_data_generator

---------

Co-authored-by: benliang99 <caliangben@gmail.com>
Co-authored-by: Andrew <caliangandrew@gmail.com>
Co-authored-by: Kenobi <108417131+kenobijon@users.noreply.github.com>