Editorial pass, 1. to 1.1.2 (fixes #298) #334

Merged
merged 27 commits into from
Sep 6, 2023
Changes from 1 commit
'incorporating direct feedback from wide review'
Robin Berjon committed Jul 18, 2023
commit 747c08fb6560f7a7b7f796c8b14e78adb319647b
51 changes: 25 additions & 26 deletions index.html
@@ -592,17 +592,13 @@
best align with ethical Web values in Web [=contexts=] ([[?ETHICAL-WEB]], [[?Why-Privacy]]).

<dfn data-lt='information'>Information flows</dfn> as used in this document refer to information
-exchanged or processed by [=actors=]. The information itself need not necessarily be
-[=personal data=]. Disruptive or interruptive information flowing <em>to</em> a
-person is in scope, as is [=de-identified=] [=data=] that can be used to manipulate people or that
-was extracted by observing people's behaviour on a website.
-
-[=Information flows=] need to be understood from more than one perspective: there is the flow of information
-<em>about</em> a person (the subject) being processed or transmitted to any other [=actor=], and there is
-the flow of information <em>towards</em> a person (the recipient). Recipients can have their privacy violated in multiple ways such as
-unexpected shocking images, loud noises while they intend to sleep, manipulative information,
-interruptive messages when their focus is on something else, or harassment when they seek
-social interactions.
+exchanged or processed by [=actors=]. Information flows need to be understood from more than one
+perspective: there is the flow of information <em>about</em> a person (the subject) being processed
+or transmitted to any other actor, and there is the flow of information <em>towards</em> a person
+(the recipient). Recipients can have their privacy violated in multiple ways such as unexpected
+shocking images, loud noises while they intend to sleep, manipulative information, interruptive
+messages when their focus is on something else, or harassment when they seek social interactions.
+(In some of these cases, the information may not be [=personal data=].)

On the Web, [=information flows=] may involve a wide variety of [=actors=] that are not always
recognizable or obvious to a user within a particular interaction. Visiting a website may involve
@@ -633,9 +629,9 @@

Affordances and interactions that decrease [=autonomy=] are known as <dfn data-lt="dark pattern|dark patterns">deceptive patterns</dfn> (or dark patterns).
A [=deceptive pattern=] does not have to be intentional ([[?Dark-Patterns]], [[?Dark-Pattern-Dark]]).

-Because we are all subject to motivated reasoning, the design of defaults and affordances
-that may impact [=autonomy=] should be the subject of independent scrutiny.
+When building something that may impact [=autonomy=], it is important to review the product from
+multiple independent perspectives to make sure that it does not introduce [=deceptive patterns=]
+that may not be obvious to its creator.

Given the large volume of potential [=data=]-related decisions in today's data economy,
complete informational self-determination is impossible. This fact, however, should not be
@@ -649,23 +645,23 @@

Several kinds of mechanisms exist to enable [=people=] to control how they interact
with systems in the world. Mechanisms that increase the number of [=purposes=] for which
-their [=data=] is being [=processed=] or the amount their [=data=] is [=processed=]
+their [=data=] is being [=processed=] or the amount of their [=data=] that is [=processed=]
are referred to as [=opt-in=] or <dfn data-lt="opt in|opt-in">consent</dfn>. Mechanisms
-that decrease this number of [=purposes=] or amount of [=processing=] are known as
+that decrease this number of [=purposes=] or amount of [=data=] being [=processed=] are known as
<dfn data-lt="opt out">opt-out</dfn>.
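The opt-in/opt-out distinction defined above can be sketched as a small consent-state model. This is an illustrative sketch only, not part of the document: the `ConsentLedger` class and the purpose strings are invented for this example. Opt-in grows the set of purposes a person has granted, opt-out shrinks it, and long-lived grants are surfaced so an indicator can invite review.

```typescript
// Hypothetical sketch of a per-purpose consent ledger (names are invented).
type Decision = "granted" | "denied" | "undecided";

class ConsentLedger {
  private state = new Map<string, { decision: Decision; since: number }>();

  // Opt-in: increases the set of purposes for which data may be processed.
  optIn(purpose: string, now: number = Date.now()): void {
    this.state.set(purpose, { decision: "granted", since: now });
  }

  // Opt-out: decreases that set; revocation stays available at any time.
  optOut(purpose: string, now: number = Date.now()): void {
    this.state.set(purpose, { decision: "denied", since: now });
  }

  // Anything not explicitly granted (including "undecided") is treated as
  // denied, so people can delay or avoid answering without losing privacy.
  allowed(purpose: string): boolean {
    return this.state.get(purpose)?.decision === "granted";
  }

  // Surface long-lived grants so a persistent-access indicator can prompt
  // people to notice and turn off access that has lasted longer than expected.
  staleGrants(maxAgeMs: number, now: number = Date.now()): string[] {
    return [...this.state]
      .filter(([, s]) => s.decision === "granted" && now - s.since > maxAgeMs)
      .map(([purpose]) => purpose);
  }
}
```

Treating "undecided" the same as denied keeps the default privacy-preserving when people postpone the decision.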

When deployed thoughtfully, these mechanisms can enhance [=people=]'s [=autonomy=]. Often,
however, they are used as a way to avoid putting in the difficult work of deciding which
types of [=processing=] are [=appropriate=] and which are not, offloading [=privacy labour=]
to the people using a system.

-In specific cases, [=people=] should be able to [=consent=] to data sharing that would otherwise be restricted,
-such as having their [=identity=] or reading history shared across contexts.
-[=Actors=] need to take care that their users are *informed* when granting this [=consent=] and
-*aware* enough about what's going on that they can know to revoke their consent
-when they want to.
-[=Consent=] is comparable to the general problem of permissions on the Web
-platform. Both consent and permissions should be requested in a way that lets
+In specific cases, [=people=] should be able to [=consent=] to data sharing that would
+otherwise be restricted, such as granting access to their pictures or geolocation.
+[=Actors=] need to take care that their users are [*informed*](#consent-principles) when
+granting this [=consent=] and *aware* enough about what's going on that they can know to
+revoke their consent when they want to.
+[=Consent=] to data processing and granting permissions to access APIs on the Web
+platform are similar problems. Both consent and permissions should be requested in a way that lets
people delay or avoid answering if they're trying to do something else. If
either results in persistent data access, there should be an indicator that lets
people notice and that lets them turn off the access if it has lasted longer
@@ -741,7 +737,11 @@
of [=privacy=] in a given context can be
contested ([[?Privacy-Contested]]). This makes privacy a problem of collective action ([[?GKC-Privacy]]).
Group-level [=data processing=] may impact populations or individuals, including in
-ways that [=people=] could not control even under the optimistic assumptions of [=consent=].
+ways that [=people=] could not control even under the optimistic assumptions of [=consent=]. For instance,
+it's possible that the only thing that a person is willing to reveal to a particular actor is that they
+are part of a given group. However, other members of the same group may be interacting with the same
+actor and revealing a lot more information, which can enable effective statistical inferences about
+people who refrain from providing information about themselves.
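The statistical-inference risk described in the added text can be made concrete with a toy calculation. This sketch is illustrative only; `inferForSilentMember` and its data shapes are invented for this example. Given attributes disclosed by some members of a group, it reports the most common value and the fraction of disclosures supporting it, which an actor could project onto a member who revealed only their membership.

```typescript
// Hypothetical illustration: infer an attribute for a person who disclosed
// only group membership, using what other group members disclosed.
function inferForSilentMember(
  disclosures: { group: string; attribute: string }[],
  group: string
): { attribute: string; support: number } | null {
  // Count each attribute value disclosed by members of the target group.
  const counts = new Map<string, number>();
  for (const d of disclosures) {
    if (d.group !== group) continue;
    counts.set(d.attribute, (counts.get(d.attribute) ?? 0) + 1);
  }
  // Pick the most common value and the fraction of disclosures backing it.
  let best: { attribute: string; support: number } | null = null;
  let total = 0;
  for (const [attribute, n] of counts) {
    total += n;
    if (best === null || n > best.support) best = { attribute, support: n };
  }
  return best === null ? null : { attribute: best.attribute, support: best.support / total };
}
```

The point of the sketch is that the silent member contributed no data at all, yet the inference about them sharpens as other members disclose more.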

What we consider is therefore not just the relation between the [=people=] who share data
and the [=actors=] that invite that sharing ([[?Relational-Turn]]), but also between the [=people=]
@@ -807,8 +807,7 @@
to ensure that "<i>broad testing and audit continues to be possible</i>" where
[=information flows=] and automated decisions are involved.

-Such transparency can only function if there are strong rights
-of access to data (including data
+Such transparency can only function if there are strong rights of access to data (including data
derived from one's personal data) as well as mechanisms to explain the outcomes of automated
decisions.
