
Update energy-efficent-framework.md #320

Open
wants to merge 1 commit into
base: main

Conversation

2BlackCoffees

Added additional considerations in the description

Title

Title of the pattern

Version

Designation of iteration on the pattern. This will initially be assigned by the patterns working group

Submitted By

The name of the person(s) submitting the pattern

Published Date

The date this version of the pattern is published. This will be provided by the patterns working group upon approval

Intent

Subtitle describing what this pattern is expected to do

Tags

Pre-defined list of tags which might apply to the pattern (e.g. Cloud, Web)

---
tags:
 - cloud
 - web
---

Problem

What is the problem this pattern is solving

Solution

How will this pattern solve the problem

SCI Impact

How will this pattern affect an SCI score of an application and why

`SCI = ((E * I) + M) per R`
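
As a rough illustration of how the formula is applied, the sketch below computes an SCI score per functional unit; every number in it is a made-up placeholder, not a measurement from this pattern:

```python
def sci(energy_kwh: float, grid_intensity_gco2_per_kwh: float,
        embodied_gco2: float, functional_units: int) -> float:
    """SCI = ((E * I) + M) per R, in gCO2e per functional unit R."""
    operational = energy_kwh * grid_intensity_gco2_per_kwh  # E * I
    return (operational + embodied_gco2) / functional_units  # (E*I + M) per R

# Placeholder example: 2 kWh consumed, 450 gCO2e/kWh grid intensity,
# 1,000 gCO2e embodied emissions, amortised over 10,000 requests.
print(sci(2.0, 450.0, 1000.0, 10_000))  # ~0.19 gCO2e per request
```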

Assumptions

What are the assumptions being made

Pros & Cons

Discussion section for pros and cons of this pattern

- **PRO**:
- **CON**:

Added additional considerations in the description

Signed-off-by: Jean-Phi <jpulpiano3@gmail.com>

Take into account that in the cloud, for example, the processing time of non-CPU-intensive applications is mostly spent idling because of strong network-related inter-service dependencies (in other words, a request over the network is typically 10,000+ times slower than a CPU operation).
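
To put that order of magnitude in perspective, here is a back-of-the-envelope sketch; the latency figures are rough, commonly cited ballpark values, not measurements tied to this pattern:

```python
# Rough ballpark latencies, order of magnitude only (not measured here).
CPU_OP_NS = 1                        # a simple CPU operation: roughly 1 ns
DATACENTER_ROUND_TRIP_NS = 500_000   # intra-datacenter network round trip: roughly 0.5 ms

ratio = DATACENTER_ROUND_TRIP_NS / CPU_OP_NS
print(f"A network round trip is roughly {ratio:,.0f}x slower than a CPU operation")
# Even conservative estimates stay well above 10,000x, which is why
# non-CPU-intensive services spend most of their wall-clock time idling.
```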

Libraries like TensorFlow or PyTorch make it easy to leverage GPUs, which have a different consumption profile than CPUs, and TPUs used with TensorFlow typically outperform CPUs as well. All of these points need to be analyzed properly before selecting a language that might otherwise bring drawbacks in terms of staffing skilled people.
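
As a minimal illustration of the accelerator point, the PyTorch sketch below falls back to the CPU when no GPU is present; the tensor sizes are arbitrary, and TPU usage would go through different APIs (for example TensorFlow's TPU support) rather than this code path:

```python
import torch

# Use a GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Arbitrary example workload: one large matrix multiplication.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b  # runs on the accelerator when present, with a very different energy profile than the CPU path

print(f"Workload ran on: {device}")
```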


Suggested change
Libraries like TensorFlow or PyTorch allow to leverage GPUs easily that would have a different consumption scheme compared to CPUs, TPU for TensorFlow typically outperform as well CPUs: All these points need to be properly analyzed before selecting a language that might bring drawbacks in terms of staffing skilled people.
Libraries like TensorFlow or PyTorch allow easy GPU utilization, which has a different consumption pattern compared to CPUs. TPUs (Tensor Processing Units) in TensorFlow typically outperform CPUs. All these points need proper analysis before selecting a language, as it may impact staffing skilled personnel.

@@ -18,6 +18,12 @@ Training an AI model implies a significant carbon footprint. The underlying fram

Typically, AI/ML frameworks built on languages like C/C++ are more energy efficient than those built on other programming languages.

Following provides an interesting view on a [normalized analysis regarding languages and their energy footprint(https://sites.google.com/view/energy-efficiency-languages/results?authuser=0)].


Suggested change
Following provides an interesting view on a [normalized analysis regarding languages and their energy footprint(https://sites.google.com/view/energy-efficiency-languages/results?authuser=0)].
Following provides an interesting view on a [normalized analysis regarding languages and their energy footprint](https://sites.google.com/view/energy-efficiency-languages/results?authuser=0).

@@ -18,6 +18,12 @@ Training an AI model implies a significant carbon footprint. The underlying fram

Typically, AI/ML frameworks built on languages like C/C++ are more energy efficient than those built on other programming languages.

Following provides an interesting view on a [normalized analysis regarding languages and their energy footprint(https://sites.google.com/view/energy-efficiency-languages/results?authuser=0)].

Take into accoutn that in the Cloud for example the processing time of non CPU intensive applications is mostly spent idling because of the strong interservice dependencies that are network related (In other words a request through the network is typically 10000+ times slower than a CPU operation). 


Suggested change
Take into accoutn that in the Cloud for example the processing time of non CPU intensive applications is mostly spent idling because of the strong interservice dependencies that are network related (In other words a request through the network is typically 10000+ times slower than a CPU operation). 
In the cloud, the processing time for non-CPU-intensive applications is often spent idling due to strong inter-service dependencies that are network-related. In other words, a request over the network is typically more than 10,000 times slower than a CPU operation.

Labels: initial review, proposed pattern (an idea for a new pattern to submit)
3 participants