
[tracking] handle 90% user facing resources #961

Closed
bryk opened this issue Jun 27, 2016 · 13 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

bryk (Contributor) commented Jun 27, 2016

User story: As a user, I want to be able to access my cluster using only the UI, only the CLI, or any mix of the two. I want to see my entire application architecture through the UI.

Goals: Show 90% of user-facing Kubernetes resources and allow CRUD operations on them. The 10% slack covers long-tail corner cases and features that do not belong in a browser.

Work estimate: 2 engineers for a quarter.

danielromlein (Contributor) commented:

@bryk RE the email discussion this week, do you want to revise this to focus on monitoring/troubleshooting? If I'm understanding correctly, we're saying it makes more sense to invest in that area of the Dashboard, since it is the functionality users primarily want (and, being visual, the Dashboard's strength), rather than striving for functional parity with kubectl.

bryk changed the title from "[tracking] reach 90% feature parity with kubectl" to "[tracking] handle 90% user facing resources" on Jul 1, 2016
bryk (Contributor, Author) commented Jul 1, 2016

@romlein Yes, as mentioned there, "monitoring and troubleshooting" is our primary focus for the quarter. The focus points were unfortunately named, so I fixed that: this issue is now "handle 90% user facing resources", and #962 stays the same.

In fact, both move us toward the goal of better troubleshooting and monitoring, because you cannot debug your system if you don't see it (or see only a part of it).

Does this make sense?

Lukenickerson (Contributor) commented:

@bryk - Do you have a list of which resources are considered user-facing?

Is there currently a method for tracking parity in the Dashboard? I'm thinking a matrix of resource types and operations could be useful for this. Here's a possible template: 100% parity would mean every cell is either checked off (if the operation can be done in the Dashboard) or marked "N/A" (if the resource doesn't support that operation).

bryk (Contributor, Author) commented Jul 12, 2016

@Lukenickerson We define user-facing resources as the resources that you create, read, update, and delete (CRUD) in order to run applications in a Kubernetes cluster.

Re tracking this: @floreks created a list of the resources in #651, and that's where we enumerate them to see what's done.

Re feature parity: I think our current aim is to support generic CRUD on the resources. Custom operations (e.g., rolling update) are likely a thing for next quarter. The spreadsheet you created is very nice, but it should be more like a list, because CRUD applies to all resources while only a few have custom verb operations (e.g., you can proxy only to a pod). How about we track it in an issue similar to @floreks' one, with a table-like list of resources and operations:

| Resource | Operation | Operation | Comment |
| --- | --- | --- | --- |
| Replica Set | CRUD [x] | Rolling Update [ ] | |
| Foo | CRUD [ ] | | bar |
| ... | ... | ... | ... |
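For a concrete picture of what "generic CRUD" means here, below is a minimal sketch (not Dashboard code; it assumes a modern client-go, which postdates this thread, and a kubeconfig at the default path) that lists replica sets through the dynamic client. The same code path serves any resource by swapping the GroupVersionResource:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config);
	// an in-cluster config would work the same way.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One GroupVersionResource per user-facing resource is all the generic
	// path needs; list/get/create/update/delete share the same code.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"}

	// Generic list; get/create/update/delete follow the same pattern.
	list, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```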

bryk (Contributor, Author) commented Jul 12, 2016

@Lukenickerson And yes, we can gather the team to produce this table, once we figure out the format :)

Lukenickerson (Contributor) commented:

@bryk I think the vertical format you suggest will work well. What's the best way of filling it out and linking it with kubectl? The difficult part for me is tying together the list of resources with the list of operations.

bryk (Contributor, Author) commented Jul 28, 2016

> What's the best way of filling it out and linking it with kubectl?

Go through the resources and read the docs. I think this is the best way.

What's also important here is the relationships between resources. For now this is even more important than the actions you can take on resources. For example, a pod's container can link to a config map, secret, persistent volume claim, etc. Similarly, a replica set can link to child pods or parent deployments. The graph here is very large.

How about we create such a graph, where nodes are resources and arrows are different kinds of relationships? Then we mark what is already implemented and what should be done. This should give a good estimate of how much more work we need to finish this. It is also a nice exercise for understanding what's going on with K8s. Can you take care of initiating this, @Lukenickerson?
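As an illustration of the proposed tracking structure (a hypothetical sketch; the sample edges and Implemented flags are illustrative, not actual Dashboard coverage), the graph reduces to an annotated edge list, which also yields the work estimate directly:

```go
package main

import "fmt"

// Relationship is one annotated arrow in the proposed resource graph.
type Relationship struct {
	From, To, Kind string
	Implemented    bool // already surfaced in the Dashboard UI?
}

func main() {
	// Sample edges taken from the discussion above; the full graph would
	// enumerate every user-facing resource pair.
	graph := []Relationship{
		{"Pod", "ConfigMap", "can mount as a volume", true},
		{"Pod", "Secret", "can use in env vars", false},
		{"Pod", "PersistentVolumeClaim", "can mount as a volume", true},
		{"ReplicaSet", "Pod", "owns child pods", true},
		{"Deployment", "ReplicaSet", "owns child replica sets", true},
	}

	// Marking what's done gives a direct estimate of the remaining work.
	done := 0
	for _, e := range graph {
		if e.Implemented {
			done++
		}
		fmt.Printf("%s -> %s (%s)\n", e.From, e.To, e.Kind)
	}
	fmt.Printf("implemented: %d of %d relationships\n", done, len(graph))
}
```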

Lukenickerson (Contributor) commented:

@bryk: Are you thinking of something along the lines of the previous graph visualization? Did that end up being useful for people? I think I might start with some static diagramming, and then see if I can convert the diagram-like visualization into something interactive. If there isn't already an issue for this, I can start one for tracking purposes.

bryk (Contributor, Author) commented Jul 29, 2016

> Are you thinking of something along the lines of the previous graph visualization? Did that end up being useful for people?

I didn't mean creating it. I meant something like: take a piece of paper, draw circles for all resources, then connect the resources with arrows and annotate the arrows with the relationship type (e.g., an arrow from a pod to a secret: "can mount as a volume", "can use in env vars"). Then you can scan it and post it here so that we can use it for tracking :) It is also a nice exercise for understanding what's going on in the K8s world.
Does this make sense?

The graph visualizations were not useful, btw. They were very hard to understand and to use in real scenarios.

maciaszczykm added the priority/P1 and kind/feature labels on Aug 9, 2016
bryk added this to the 1.4 milestone on Aug 19, 2016
bryk removed this from the 1.4 milestone on Oct 24, 2016
bryk (Contributor, Author) commented Oct 27, 2016

I moved tracking of this feature to the kubernetes/features repo. That's the place we should be putting our large projects.

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 23, 2017
fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 22, 2018
fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close

maciaszczykm reopened this on Feb 21, 2018
maciaszczykm added the lifecycle/frozen label on Feb 27, 2018
maciaszczykm removed the lifecycle/rotten and priority/P1 labels on Nov 8, 2018