NOTE: ISSUES ARE NOT FOR CODE HELP - Ask for Help at https://stackoverflow.com
Your issue may already be reported! Please search the issue tracker before creating a new one.
I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here; see the note at the top of this template.
Issue Description
When running the example code provided in the README file, the `api.score` function throws an exception during an internal call because of missing arguments.
Expected Behavior
A score value to be calculated based on the predictions and actual values.
Current Behavior
A `TypeError` is raised from an internal call in the `score` function.
Your Code
```python
from autoPyTorch.api.time_series_forecasting import TimeSeriesForecastingTask

# data and metric imports
from sktime.datasets import load_longley
targets, features = load_longley()

# define the forecasting horizon
forecasting_horizon = 3

# Dataset optimized by APT-TS can be a list of np.ndarray / pd.DataFrame, where each series represents an element in
# the list, or a single pd.DataFrame that records the series.
# Index information: to which series does the timestep belong? This id can be stored as the DataFrame's index or a
# separate column.
# Within each series, we take the last forecasting_horizon values as test targets; the items before that are training
# targets. Normally the values to be forecasted should follow the training sets.
y_train = [targets[:-forecasting_horizon]]
y_test = [targets[-forecasting_horizon:]]

# The same for features. For univariate models, X_train and X_test can be omitted or set to None.
X_train = [features[:-forecasting_horizon]]
# Here X_test indicates the 'known future features': features that are known in advance. Features that are unknown
# could be replaced with NaN or zeros (which will not be used by our networks). If no feature is known beforehand,
# we could also omit X_test.
known_future_features = list(features.columns)
X_test = [features[-forecasting_horizon:]]

start_times = [targets.index.to_timestamp()[0]]
freq = '1Y'

# initialise the Auto-PyTorch API
api = TimeSeriesForecastingTask()

# Search for an ensemble of machine learning algorithms
api.search(
    X_train=X_train,
    y_train=y_train,
    X_test=X_test,
    optimize_metric='mean_MAPE_forecasting',
    n_prediction_steps=forecasting_horizon,
    memory_limit=16 * 1024,  # Currently, forecasting models use much more memory
    freq=freq,
    start_times=start_times,
    func_eval_time_limit_secs=50,
    total_walltime_limit=60,
    min_num_test_instances=1000,  # proxy validation sets; this only works for tasks with more than 1000 series
    known_future_features=known_future_features,
)

# Our dataset can directly generate sequences for new datasets
test_sets = api.dataset.generate_test_seqs()

# Calculate test accuracy
y_pred = api.predict(test_sets)
score = api.score(y_pred, y_test)
print("Forecasting score", score)
```
Error message
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-4599e5448863> in <module>
     52 # Calculate test accuracy
     53 y_pred = api.predict(test_sets)
---> 54 score = api.score(y_pred, y_test)
     55 print("Forecasting score", score)

/usr/local/lib/python3.8/dist-packages/autoPyTorch/api/base_task.py in score(self, y_pred, y_test)
   1908             raise ValueError("AutoPytorch failed to infer a task type from the dataset "
   1909                              "Please check the log file for related errors. ")
-> 1910         return calculate_score(target=y_test, prediction=y_pred,
   1911                                task_type=STRING_TO_TASK_TYPES[self.task_type],
   1912                                metrics=[self._metric])

/usr/local/lib/python3.8/dist-packages/autoPyTorch/pipeline/components/training/metrics/utils.py in calculate_score(target, prediction, task_type, metrics, **score_kwargs)
    144             score_dict[metric_.name] = metric_._sign * metric_(target_scaled, cprediction_scaled, **score_kwargs)
    145         else:
--> 146             score_dict[metric_.name] = metric_._sign * metric_(target, cprediction, **score_kwargs)
    147     elif task_type in REGRESSION_TASKS:
    148         cprediction = sanitize_array(prediction)

TypeError: __call__() missing 2 required positional arguments: 'sp' and 'n_prediction_steps'
```
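As a possible interim workaround (a minimal, untested sketch rather than a confirmed fix): the traceback shows that `calculate_score` forwards extra keyword arguments to the metric via `**score_kwargs`, so calling it directly and passing the two arguments the metric reports as missing might avoid the error. The import paths follow the file paths in the traceback; the value `sp=1` (seasonal periodicity) is an assumption and may need to match the dataset.

```python
# Hedged workaround sketch (untested): call calculate_score directly and forward
# the forecasting-specific arguments that the metric's __call__ reports as missing.
# Import paths are taken from the traceback above; sp=1 is an assumed value.
from autoPyTorch.constants import STRING_TO_TASK_TYPES
from autoPyTorch.pipeline.components.training.metrics.utils import calculate_score

score = calculate_score(
    target=y_test,
    prediction=y_pred,
    task_type=STRING_TO_TASK_TYPES[api.task_type],
    metrics=[api._metric],
    sp=1,                                    # seasonal periodicity (assumption)
    n_prediction_steps=forecasting_horizon,  # same horizon passed to api.search
)
print("Forecasting score", score)
```

If this works, it would suggest that `api.score` simply does not forward `sp` and `n_prediction_steps` to forecasting metrics.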
Please, are you still able to run Auto-PyTorch on Google Colab?
I'm having trouble installing it on Colab. I've tried many ways, but it doesn't install.
How did you install it? Thanks @antbz
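For reference, a minimal install sketch for a Colab runtime, based on the package name and the optional `forecasting` extra documented in the README; the `swig` system package is assumed to be required for building the `pyrfr` dependency. This is only a sketch of the documented steps, not a confirmed fix for the installation problem above.

```python
# Hedged sketch of the documented install steps, runnable in a Colab cell.
# Assumptions: the PyPI package name "autoPyTorch", its "forecasting" extra,
# and swig being needed to build pyrfr.
import subprocess

subprocess.run(["apt-get", "install", "-y", "swig"], check=True)
subprocess.run(["pip", "install", "autoPyTorch[forecasting]"], check=True)
```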