
Adds new public content metadata endpoints, allows for remote browsing of metadata#9474

Merged
rtibbles merged 8 commits into learningequality:develop from rtibbles:public_contentnode on Jun 2, 2022
Conversation


@rtibbles rtibbles commented Jun 1, 2022

Summary

  • Cleans up all previously existing 'user data' specific endpoints for content node
  • Simplifies and adds etags to all content metadata endpoints
  • Adds a new v2 namespace for public content metadata endpoints
  • Expands the existing internal ChannelMetadataViewset to include all public data and uses it for the v2 endpoint
  • Adds public ContentNodeViewset and ContentNodeTreeViewset to allow remote browsing of public metadata
  • Adds baseurl query parameter handling to these three internal endpoints
  • This means that the internal endpoints can now proxy fetches to the public endpoints of other Kolibri instances
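The baseurl proxying described above can be sketched roughly as follows. This is a minimal illustration, not Kolibri's actual implementation: the `build_remote_url` helper and the exact way the prefix is assembled are assumptions, with only the `api/public/v2/` path taken from the example URLs below.

```python
from urllib.parse import urlencode, urljoin

# Assumed public API prefix, matching the example URLs in this PR description.
PUBLIC_API_PREFIX = "api/public/v2/"

def build_remote_url(baseurl, endpoint, params):
    """Hypothetical helper: build the remote public URL that a local
    endpoint would proxy a fetch to.

    The 'baseurl' query parameter itself is stripped before forwarding,
    since it only has meaning on the local instance.
    """
    forwarded = {k: v for k, v in params.items() if k != "baseurl"}
    url = urljoin(baseurl, PUBLIC_API_PREFIX + endpoint + "/")
    if forwarded:
        url += "?" + urlencode(forwarded)
    return url

print(build_remote_url(
    "http://127.0.0.1:8080/",
    "contentnode",
    {"format": "json", "max_results": "25", "baseurl": "http://127.0.0.1:8080/"},
))
# → http://127.0.0.1:8080/api/public/v2/contentnode/?format=json&max_results=25
```

The key point is that the local `/api/content/...` endpoints stay the single interface the frontend talks to, whether the data is local or fetched from a remote instance's public API.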

References

Fixes #9381
Fixes #9380
Fixes #9390
Fixes #9385
Fixes #9382
Fixes #9391

Reviewer guidance

Test the three endpoints with another instance of Kolibri running (preferably from a different Kolibri home dir, so that different metadata will be returned).

Example URLs:

Channel:
http://127.0.0.1:8000/api/content/channel/?format=json&baseurl=http://127.0.0.1:8080/
Public:
http://127.0.0.1:8000/api/public/v2/channel/?format=json

ContentNode:
http://127.0.0.1:8000/api/content/contentnode/?format=json&max_results=25&baseurl=http://127.0.0.1:8080/
Public:
http://127.0.0.1:8000/api/public/v2/contentnode/?format=json&max_results=25

Tree:
http://127.0.0.1:8000/api/content/contentnode_tree/<topic_id>/?format=json&max_results=2&baseurl=http://127.0.0.1:8080/
Public:
http://127.0.0.1:8000/api/public/v2/contentnode_tree/<topic_id>/?format=json&max_results=2
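While testing, it may also be worth verifying the ETag behavior this PR adds to the metadata endpoints. The following is a self-contained sketch of the client-side conditional-request pattern; the "server" here is simulated, and Kolibri's actual ETag values and response details may differ.

```python
# Sketch of ETag-based caching against a metadata endpoint.
# A real client would send an If-None-Match header over HTTP; here the
# server side is simulated so the example runs standalone.

def fake_server(etag_header=None):
    """Simulated metadata endpoint: returns (status, etag, body)."""
    current_etag = '"abc123"'  # would change whenever the metadata changes
    if etag_header == current_etag:
        return 304, current_etag, None  # Not Modified: client cache is fresh
    return 200, current_etag, {"channels": ["..."]}

cache = {}

def fetch_with_cache():
    status, etag, body = fake_server(cache.get("etag"))
    if status == 304:
        return cache["body"]  # reuse the cached payload
    cache.update(etag=etag, body=body)
    return body

first = fetch_with_cache()   # 200: populates the cache
second = fetch_with_cache()  # 304: served from the cache
assert first == second
```

A quick manual check is to request one of the URLs above twice, the second time with `If-None-Match` set to the ETag from the first response, and confirm a 304 comes back.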


Testing checklist

  • Contributor has fully tested the PR manually
  • If there are any front-end changes, before/after screenshots are included
  • Critical user journeys are covered by Gherkin stories
  • Critical and brittle code paths are covered by unit tests

PR process

  • PR has the correct target branch and milestone
  • PR has 'needs review' or 'work-in-progress' label
  • If PR is ready for review, a reviewer has been added. (Don't use 'Assignees')
  • If this is an important user-facing change, PR or related issue has a 'changelog' label
  • If this includes an internal dependency change, a link to the diff is provided

Reviewer checklist

  • Automated test coverage is satisfactory
  • PR is fully functional
  • PR has been tested for accessibility regressions
  • External dependency files were updated if necessary (yarn and pip)
  • Documentation is updated
  • Contributor is in AUTHORS.md

@rtibbles rtibbles added the TODO: needs review Waiting for review label Jun 1, 2022
@rtibbles rtibbles added this to the 0.16.0 milestone Jun 1, 2022

@rtibbles rtibbles force-pushed the public_contentnode branch from 2202a72 to 3e40bb4 on June 1, 2022 at 21:41
@jredrejo jredrejo left a comment


Code looks good to me. I've tested it and haven't found any issues.

However, I'd love to know the purpose of this new proxy API. It is obviously incompatible with previous versions of Kolibri, so I'd like to be sure it is not intended to interact with old servers at any point.


rtibbles commented Jun 2, 2022

> However I'd love to know the purpose of this new proxy api

The primary purpose is to allow remote browsing of the libraries of other Kolibri instances, so that resources can be directly interacted with and previewed without having to import them first. In an ideal world, we would make this work for any previous version of Kolibri as well, but my sense is that the complexity we would have to introduce to guarantee that would be high.

If we do change our content schema in the future, we would have to provide some sort of translation layer (in the same way we do now for metadata import) to map from an older schema to a newer one. I am in two minds about whether we will want to do this, but that's the reason for using the versioned public APIs: so we have a clear signal for when we need to do that mapping, if needed.
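The translation-layer idea mentioned above might look roughly like this. It is purely illustrative: the field names, version labels, and function names are all invented, and the actual metadata import mapping in Kolibri works differently in its details.

```python
# Illustrative sketch of a schema translation layer: map a payload returned
# by an older public API version onto the fields the current version expects.
# All field names and version numbers here are hypothetical.

def translate_v1_to_v2(node):
    """Map a hypothetical v1 content node payload to the v2 shape."""
    translated = dict(node)
    # e.g. a field renamed in the newer schema:
    if "kind" in translated and "learning_activity" not in translated:
        translated["learning_activity"] = translated.pop("kind")
    return translated

# Registry keyed by (source_version, target_version), so adding support for
# another old version is just another entry.
TRANSLATORS = {("v1", "v2"): translate_v1_to_v2}

def normalize(payload, source_version, target_version="v2"):
    if source_version == target_version:
        return payload
    return TRANSLATORS[(source_version, target_version)](payload)

print(normalize({"kind": "video"}, "v1"))
# → {'learning_activity': 'video'}
```

The versioned `/api/public/v2/` namespace is what would make this workable: the version in the URL tells the client which translator, if any, it needs to apply.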
