After modifying the `entrypoint.py` as needed, using any dependencies you add in the `.venv` virtual environment, you can run this script in Data Cloud:

**To Add New Dependencies**:

1. Make sure your virtual environment is activated
2. Add dependencies to `requirements.txt`
3. Run `pip install -r requirements.txt`
4. The SDK automatically packages all dependencies when you run `datacustomcode zip`
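As a sketch, the four steps above might look like this in a shell session (the package name `requests` is only a placeholder for whatever dependency you actually need):

```shell
# From your project root, with .venv already created
source .venv/bin/activate            # 1. activate the virtual environment
echo "requests" >> requirements.txt  # 2. record the new dependency (example package)
pip install -r requirements.txt     # 3. install everything into .venv
datacustomcode zip                  # 4. the SDK packages the dependencies for upload
```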
> The `deploy` process can take several minutes. If you'd like more feedback on the underlying process, you can add `--debug` to the command, like `datacustomcode --debug deploy --path ./payload --name my_custom_script`.

> [!NOTE]
> **CPU Size**: Choose the appropriate CPU/Compute Size based on your workload requirements:
> - **CPU_L / CPU_XL / CPU_2XL / CPU_4XL**: Large, X-Large, 2X-Large and 4X-Large CPU instances for data processing
> - Default is `CPU_2XL`, which provides a good balance of performance and cost for most use cases
You can now use the Salesforce Data Cloud UI to find the created Data Transform and use the `Run Now` button to run it.
#### `datacustomcode init`

Initialize a new development environment with a template.

Argument:

- `DIRECTORY`: Directory to create project in (default: ".")
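For example, to scaffold a new project in a subdirectory (the name `my_project` is just an illustration; omitting it uses the current directory):

```shell
datacustomcode init my_project
```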
#### `datacustomcode scan`

Scan a Python file to generate a Data Cloud configuration.

Options:

- `--config TEXT`: Path to save the configuration file (default: same directory as FILENAME)
- `--dry-run`: Preview the configuration without saving to a file
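For instance, to preview the configuration that would be generated for the template entrypoint without writing anything to disk:

```shell
datacustomcode scan ./payload/entrypoint.py --dry-run
```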
#### `datacustomcode run`

Run an entrypoint file locally for testing.

Options:

- `--config-file TEXT`: Path to configuration file
- `--dependencies TEXT`: Additional dependencies (can be specified multiple times)
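For example, to execute the template entrypoint locally before deploying:

```shell
datacustomcode run ./payload/entrypoint.py
```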
#### `datacustomcode zip`

Zip a transformation job in preparation to upload to Data Cloud.

Options:

- `--path TEXT`: Path to the code directory (default: ".")
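For example, to package the code in the `./payload` directory:

```shell
datacustomcode zip --path ./payload
```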
#### `datacustomcode deploy`

Deploy a transformation job to Data Cloud.

Options:

- `--profile TEXT`: Credential profile name (default: "default")
- `--path TEXT`: Path to the code directory (default: ".")
- `--name TEXT`: Name of the transformation job [required]
- `--version TEXT`: Version of the transformation job (default: "0.0.1")
- `--description TEXT`: Description of the transformation job (default: "")
- `--cpu-size TEXT`: CPU size for the deployment (default: "CPU_XL"). Available options: CPU_L (Large), CPU_XL (Extra Large), CPU_2XL (2X Large), CPU_4XL (4X Large)
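Putting the options together, a deployment that picks an explicit CPU size might look like this (`my_custom_script` is just an example job name):

```shell
datacustomcode deploy --path ./payload --name my_custom_script --cpu-size CPU_XL
```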
## Docker usage

The SDK provides Docker-based development options that allow you to test your code in an environment that closely resembles Data Cloud's execution environment.