
Fix olap_workload #12849

Merged
merged 1 commit into from
Dec 22, 2024
12 changes: 4 additions & 8 deletions ydb/tools/olap_workload/__main__.py
@@ -149,8 +149,7 @@ def drop_all_columns(self, table_name):
if c.name != "id":
self.drop_column(table_name, c.name)

def queries_while_alter(self):
table_name = "queries_while_alter"
Collaborator (Author) commented: For some reason, all operations were performed on a single hardcoded table.

def queries_while_alter(self, table_name):

schema = self.list_columns(table_name)

@@ -174,14 +173,11 @@ def run(self):

while time.time() - started_at < self.duration:
try:
self.create_table("queries_while_alter")

# TODO: drop only created tables by current workload
self.drop_all_tables()

self.queries_while_alter()

table_name = table_name_with_timestamp()
self.create_table(table_name)
Collaborator (Author) commented: I didn't understand this part: random tables were being created here that were never used anywhere.

self.queries_while_alter(table_name)
except Exception as e:
print(type(e), e)

@@ -192,7 +188,7 @@
)
parser.add_argument('--endpoint', default='localhost:2135', help="An endpoint to be used")
parser.add_argument('--database', default=None, required=True, help='A database to connect')
parser.add_argument('--duration', default=120, type=lambda x: int(x), help='A duration of workload in seconds.')
parser.add_argument('--duration', default=10 ** 9, type=lambda x: int(x), help='A duration of workload in seconds.')
parser.add_argument('--batch_size', default=1000, help='Batch size for bulk insert')
args = parser.parse_args()
with Workload(args.endpoint, args.database, args.duration, args.batch_size) as workload:
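The refactored `run` loop can be sketched as follows. This is a minimal, self-contained sketch, not the actual workload code: `table_name_with_timestamp`, `create_table`, `drop_all_tables`, and `queries_while_alter` are simplified stand-ins for the real methods in `ydb/tools/olap_workload/__main__.py`, and no YDB connection is involved. It shows the shape of the change: each iteration now creates a uniquely named table and passes that name into `queries_while_alter`, instead of operating on a hardcoded `"queries_while_alter"` table.

```python
import time


def table_name_with_timestamp(prefix: str = "olap_workload") -> str:
    # Build a per-iteration table name, e.g. "olap_workload_1703203200".
    # (The real helper may use a finer-grained timestamp; this is a sketch.)
    return f"{prefix}_{int(time.time())}"


class WorkloadSketch:
    def __init__(self, duration: float):
        self.duration = duration
        self.tables = []

    def create_table(self, table_name: str) -> None:
        self.tables.append(table_name)

    def drop_all_tables(self) -> None:
        # TODO (carried over from the PR): drop only tables
        # created by the current workload run.
        self.tables.clear()

    def queries_while_alter(self, table_name: str) -> None:
        # The real method runs queries while altering the schema of
        # `table_name`; after this PR it uses the table passed in,
        # not a hardcoded one.
        assert table_name in self.tables

    def run(self) -> None:
        started_at = time.time()
        while time.time() - started_at < self.duration:
            try:
                self.drop_all_tables()
                table_name = table_name_with_timestamp()
                self.create_table(table_name)
                self.queries_while_alter(table_name)
            except Exception as e:
                print(type(e), e)
```

With the `--duration` default raised to `10 ** 9` seconds, the loop effectively runs until the process is stopped externally, which matches a long-running stress-test workload.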