[iheartradio:podcast] Add new extractor #27037
Conversation
you should split the extractor into:
@remitamine Done, also made a few fixes.
Force-pushed from 904c312 to 00ae1f6.
Made commit message a bit shorter.
The episode and podcast code is still mixed up: code that is specific to podcasts should not be put in the base extractor, and the same goes for the episode extractor.
Force-pushed from 305f463 to a1c4403.
This last commit should be good.
youtube_dl/extractor/iheartradio.py
Outdated
episodes = self._get_all_episodes(podcast_id, temp_user)
episode_ids = [episode['id'] for episode in episodes]

streams_info = self._get_streams_info(podcast_id, episode_ids,
                                      temp_user, podcast_id)
no, you should delegate the extraction to the episode extractor.
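The delegation the reviewer asks for can be sketched as follows. This is a hypothetical, standalone illustration of youtube-dl's `url_result` pattern, not the actual extractor code; the episode URL template and the `IHeartRadio` ie_key are assumptions.

```python
# Hedged sketch of delegating playlist entries to the episode extractor,
# modeled on youtube-dl's url_result()/playlist_result() convention.
# Standalone illustration: the URL template and 'IHeartRadio' ie_key
# below are assumptions, not taken from the actual PR.

def url_result(url, ie_key=None):
    # Mirrors InfoExtractor.url_result(): a '_type': 'url' entry tells
    # the framework to re-dispatch extraction to the matching extractor.
    return {'_type': 'url', 'url': url, 'ie_key': ie_key}

def extract_podcast(podcast_id, episode_ids):
    # The podcast (playlist) extractor only collects episode URLs; it
    # never calls any stream APIs itself -- the episode extractor does.
    entries = [
        url_result(
            'https://www.iheart.com/podcast/%s/episode/%s/' % (podcast_id, ep_id),
            ie_key='IHeartRadio')
        for ep_id in episode_ids
    ]
    return {'_type': 'playlist', 'id': podcast_id, 'entries': entries}

result = extract_podcast('example-podcast', ['1001', '1002'])
```

This keeps all stream/metadata fetching in one place, so a fix to the episode extractor automatically benefits podcast playlists.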
youtube_dl/extractor/iheartradio.py
Outdated
# Register anonymous user, same behavior as web app
def _register_temp_user(self, current_id):
    random_device_id = compat_str(uuid.uuid4())
    random_oauth_id = compat_str(uuid.uuid4())

    register_user_values = urlencode_postdata({
        'accessToken': 'anon',
        'accessTokenType': 'anon',
        'deviceId': random_device_id,
        'deviceName': 'web-desktop',
        'host': 'webapp.WW',
        'oauthUuid': random_oauth_id,
        'userName': 'anon' + random_oauth_id
    })
    return self._download_json(
        'https://ww.api.iheart.com/api/v1/account/loginOrCreateOauthUser',
        current_id, 'Registering temporary user', data=register_user_values,
        headers={'Accept': 'application/json, text/plain, */*',
                 'X-hostName': 'webapp.WW'})
This can be avoided by using API v3 for all requests.
I'm not sure how I would do that; I was just looking at the network requests made by the web frontend and copying that behavior.
Also, what are we trying to avoid? Other API calls require the output from this.
> I'm not sure how I would do that; I was just looking at the network requests made by the web frontend and copying that behavior.
> Also, what are we trying to avoid? Other API calls require the output from this.
Trying to avoid additional requests. To move quickly with this PR, I think I would make a simpler implementation, and then you can base your modifications on it.
The extractor that relies on API v3 has been added in 9c484c0.
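For context, a minimal sketch of what "use API v3" buys here: the v3 podcast endpoints can be queried directly, with no anonymous-user registration or auth headers, unlike the v1 flow above. The base URL and the `episodes/<id>` path pattern are assumptions modeled on the commit mentioned, and only URL construction is shown (no network call).

```python
# Hedged sketch: with API v3 the anonymous-user registration step becomes
# unnecessary, since the podcast endpoints respond without an auth token.
# The base URL and path pattern below are assumptions, not verified here.
_API_BASE = 'https://api.iheart.com/api/v3/podcast/'

def build_api_url(path):
    # e.g. 'episodes/12345' for one episode's metadata
    # (path shapes assumed for illustration)
    return _API_BASE + path

episode_url = build_api_url('episodes/12345')
```

The design point is that every request becomes a single plain `_download_json` call, instead of a registration round-trip followed by authenticated calls.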
youtube_dl/extractor/iheartradio.py
Outdated
)

# To get the audio files, we have to use their internal API
no need to add comments that can be easily deduced from the code.
youtube_dl/extractor/iheartradio.py
Outdated
# Release date timestamp is in milliseconds
release_date = content_info.get('startDate')
if isinstance(release_date, Number) and release_date > 2000000000:
    release_date /= 1000
use the int_or_none function.
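A minimal sketch of the suggested fix: `int_or_none` in `youtube_dl.utils` accepts a `scale` argument, so the millisecond heuristic can be replaced with an explicit conversion. The helper below is a simplified stand-in reimplemented here only so the sketch runs standalone.

```python
def int_or_none(v, scale=1, default=None):
    # Simplified stand-in for youtube_dl.utils.int_or_none: returns
    # int(v) // scale, or `default` when v is missing or not numeric.
    try:
        return int(v) // scale
    except (TypeError, ValueError):
        return default

# 'startDate' comes back in milliseconds; scale=1000 yields seconds,
# with no magic-number threshold needed.
content_info = {'startDate': 1604500000000}  # sample value
timestamp = int_or_none(content_info.get('startDate'), scale=1000)
```

Besides dropping the `> 2000000000` heuristic, this also handles a missing or malformed `startDate` gracefully by returning `None`.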
youtube_dl/extractor/iheartradio.py
Outdated
# Remove analytics from stream URL (optional)
streamUrl = item_info['streamUrl']
streamUrl = re.sub(r'(?:www\.)?podtrac\.com/pts/redirect\.[\w]*/', '', streamUrl)
streamUrl = re.sub(r'chtbl\.com/track/[\w]*/', '', streamUrl)
streamUrl = re.sub(r'\?source=[\w]*', '', streamUrl)
this should be done separately from this PR (it's used in multiple services supported by youtube-dl, so it should be done in a generic way).
I couldn't find any other mentions of "podtrac" or "chtbl" in other extractors. But yeah, I guess I shouldn't be doing that in this file.
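The generic version the reviewer has in mind could look something like the sketch below: a shared helper (hypothetical here, not an existing youtube-dl API at the time of this exchange) that any podcast extractor could call on a stream URL, instead of each extractor carrying its own tracker regexes.

```python
import re

# Hypothetical shared-helper sketch: strip known analytics/tracking
# redirect prefixes from a podcast stream URL, so the cleanup is done
# once, generically, instead of inside a single extractor. The prefix
# list mirrors the regexes from the PR hunk above.
_TRACKER_PREFIXES = [
    r'(?:www\.)?podtrac\.com/pts/redirect\.\w+/',
    r'chtbl\.com/track/\w+/',
]

def clean_podcast_url(url):
    for prefix in _TRACKER_PREFIXES:
        url = re.sub(prefix, '', url)
    # Drop a trailing analytics query string like '?source=xyz'.
    return re.sub(r'\?source=\w+$', '', url)

cleaned = clean_podcast_url(
    'https://www.podtrac.com/pts/redirect.mp3/'
    'chtbl.com/track/ABC123/traffic.example.com/episode.mp3?source=feed')
```

Centralizing this also means new tracker domains only need to be added in one place.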
Force-pushed from f1f6ab5 to 38e12bf.
@remitamine Thanks for co-operating. The description has no HTML tags now.
Description of your pull request and other information
(continued from #26394)
Added support for podcasts from https://iheart.com/podcast. Supports individual episodes, as well as entire series of podcasts, using their internal API.
Note that iHeartRadio offers other services besides podcasts, but those seem to be available only in the US and/or Canada. That includes things like music playlists and actual radio stations. I did not include support for them because they are not available where I live and, frankly, I really don't care about those features.