
Supply chains can be trusted - expand document to consider this possibility #83

Closed
jwrosewell opened this issue Jun 4, 2020 · 33 comments

@jwrosewell

Section 4.4 “Explicitly restrict the feature to first party origins” does not consider the possibility that a person can trust a first party and all the suppliers that first party has chosen to work with.

In no other industry is the burden placed on people to understand the entire supply chain of a supplier. People purchasing automobiles are not required to know who supplies the brakes or airbags despite these components being essential to their safety. Instead they trust the automobile manufacturer to choose these suppliers.

@hober hober added the question label Jun 11, 2020
@torgo torgo self-assigned this Jul 6, 2020
@torgo
Member

torgo commented Jul 6, 2020

In no other industry is the burden placed on people to understand the entire supply chain of a supplier.

I am not sure that is true. Look at the rise of organic food, fair trade, and other similar schemes. Or indeed look at the transparency reports that companies like Apple are producing on their supply chains. Consumers are increasingly putting pressure on ethical supply chains.

@jwrosewell
Author

Focus on "ethical supply chain" is different to "understand the entire supply chain".

When I buy coffee with a fair trade mark of approval I know I'm buying coffee that fair trade have certified as operating to an ethical set of principles which I can easily trust. In this situation I know at point of sale that the entire supply chain of my coffee supplier can be trusted. I do not need to understand the entire supply chain of the coffee brand to trust the coffee brand.

Expanding this concept to the web and framing privacy and security to facilitate such a concept is exactly the issue I'm raising and seeking modifications to support.

@annevk
Member

annevk commented Jul 7, 2020

Through Permissions Policy the first party can delegate authority to third parties, but only within the scope of the first party. This should probably clarify that, in general, features that are restricted to first parties should be able to be delegated in this manner, since a first party that trusts its third parties could also postMessage() the relevant data to them.

It might still create tricky cases if there is UI involved, however, and we cannot "blame" the first party for such requests (e.g., WebAuthn), so in general the recommendation to start with first parties makes a lot of sense, especially when there is UI involved.
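A minimal sketch of the delegation described above, assuming a hypothetical partner origin and payload shape (neither appears in this thread): the first party grants a policy-controlled feature to one specific iframe via the allow attribute, and explicitly hands over data it already holds with postMessage().

```ts
// First party embeds a third-party widget and delegates a policy-controlled
// feature (geolocation, purely as a stand-in) to that frame only.
const widget = document.createElement("iframe");
widget.src = "https://widget.example-partner.com/embed"; // hypothetical partner
widget.allow = "geolocation"; // the allowlist defaults to the frame's own origin
document.body.appendChild(widget);

// Data the first party already holds can be shared explicitly, scoped to the
// partner's origin; the message payload here is hypothetical.
widget.addEventListener("load", () => {
  widget.contentWindow?.postMessage(
    { kind: "session-context", consented: true },
    "https://widget.example-partner.com"
  );
});
```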

@jwrosewell
Author

General observation: as a newbie to the W3C who is trying very hard to follow what is going on, I find that the "mosaic" of documents makes understanding a given proposal very hard. This is especially true when the versions change independently of one another and alter the meaning of dependent documents.

One way to reduce this complexity in this instance is for the questionnaire to be a self-contained document.

In relation to trust and the previous comment: suppose a group of publishers needs to collaborate with one another to achieve scale. A use case such as fraud prevention requires this. In this situation the supply chain partner could not be isolated to a single third party. The questionnaire should recognise this possibility and not exclude it.

By restricting such possibilities people will need to surrender their personal data more frequently to log into multiple providers and then share IDs between these providers. This would result in more personal data being provided to more parties. It will certainly result in frustrating friction.

A way to limit this is via single sign-on services. In practice such services are run by large companies with the scale to capture lots of first party data. These large companies then grow larger still, as it becomes impossible for small companies to compete. The questionnaire needs to recognise this broader set of issues, which all relate back to trust choices.

@pes10k
Contributor

pes10k commented Jul 8, 2020

I don't think arguing by analogy is useful here. The reason for the emphasis on first parties as the trust / security / privacy boundary is that anything further is impossible to enforce and quickly collapses to either "impossible burden on users" or "everyone gets everything".

On the web users are at risk for having extremely sensitive data (their names, behaviors, financial information, etc) captured and exfiltrated by extremely large numbers of parties, most of which are unknown to the user, and many of which are hostile to the user's interests.

Since it's impossible for users to understand and categorize the trustworthiness of all of these parties, we are left with the least-worst option: trying to limit the number of parties the user needs to trust to one, the top-level / first party.

@jwrosewell
Author

Is there a policy within the W3C which states that either "impossible burden on users" or "everyone gets everything" is explicitly invalid? If not, where are the boundaries defined? If so, how would such a policy be reconciled against antitrust issues where first parties have a competitive advantage over third parties? Whether argued via analogy or in other ways, these are important questions.

On the web users are at risk for having extremely sensitive data (their names, behaviors, financial information, etc) captured and exfiltrated by extremely large numbers of parties, most of which are unknown to the user, and many of which are hostile to the user's interests.

What evidence is there to support this? In Europe at least the largest fines for breach of privacy regulation have been received by Google. Presumably at times Google would be a web browser, first party and third party according to this document.

Since it's impossible for users to understand and categorize the trustworthiness of all of these parties, we are left with the least-worst option: trying to limit the number of parties the user needs to trust to one, the top-level / first party.

Why is it impossible? Has any work been done to conclude this? It is perfectly possible for people’s financial information and names to be transferred between parties that they did not explicitly consent to. People do it every day when making payments. Perhaps this issue would benefit from a deeper understanding of the practices deployed in other industries.

In any case the document needs to acknowledge the importance of supply chains and the benefits they provide. See the suggestions offered in this comment.

@torgo
Member

torgo commented Aug 6, 2020

Let's try to focus this discussion on specific issues having to do with the document in question - the security & privacy questionnaire. We have already established that the statement "supply chains can be trusted" (on its own) is false. @jwrosewell do you have a specific proposed text change you would like to see in the document that we can debate? Otherwise I will close this issue.

@joshuakoran

@torgo - Could you please reference the support for the statement that publishers' "supply chains cannot be trusted"? Most people tend to trust the supply chains of the businesses they frequent--as open markets require this delegation of trust. I'm wondering where the W3C has previously stated that its default position is to the contrary.

@torgo
Member

torgo commented Aug 7, 2020

As I already pointed out, if this were true then there would not be a general consumer push for supply chain transparency (which there clearly is). If you're looking for an example closer to web advertising, please have a read of @darobin's great article about what the NY Times is doing to "improve privacy in marketing" including limiting the use of 3rd party tracking (because they do not trust the supply chain of the advertising ecosystem). This issue is now closed.

@torgo torgo closed this as completed Aug 7, 2020
@darobin
Member

darobin commented Aug 7, 2020

In support of this issue being closed: publishers, especially the smaller and more embattled ones, are under tremendous pressure to accept shady supply chains. Also, supply chains are completely opaque. In a recent ISBA study, despite tremendous effort, a large group of blue-chip brands and premium publishers could not account for over 15% of the spend. That's the money; they couldn't even begin to trace the data. Here is the supply chain for Snopes, a high-profile publisher that has spent a lot of energy trying to clean up their act.

There is no equivalent of an organic or fair trade label for adtech. It might have been useful but the industry lacks the governance to establish one and now that ship has sailed.

There is a policy in W3C that puts users first: the Priority of Constituencies. This requires browsers to abide by users' privacy preferences, which are very well established, and to prevent both "impossible burden on users" and "everyone gets everything".

@jwrosewell
Author

jwrosewell commented Aug 10, 2020

@darobin I am happy to hear that you agree publishers need to work with supply chains to fund their businesses, even ones as large as the New York Times. We also agree that we should advance the standards to help publishers choose clean “fair trade” supply chains, rather than those that are “shady”.

The later comments on this issue moved towards the use case for advertising. There are many other supply chains which people can trust, including financial transactions such as payments or buying and selling stocks and shares. On the web almost every website has a supply chain that people trust daily, including service providers, hosting, and software companies. However, to stick with the example you have chosen, here is some analysis of the information you provided which indicates the issue has not been resolved.

The conclusion of the ISBA study states: “All market participants must contribute to industry evolution. This includes: a shared understanding and application of 'transparency'; contractual arrangements with standardised definitions; clear and consistent protocols for sharing data; careful monitoring of log level reports; supporting industry initiatives to investigate any unattributable costs; and implementing robust governance and compliance programmes.”

Since you cited this study, I would hope we can both agree that we need better auditability, accountability, and choice for people and publishers, which can be achieved through better standards of interoperability. This ensures the smaller publishers who must rely on vendors have a choice as to which ones they can work with.

As you note, the study does have data quality issues for the 0.6% of UK digital media spend it analyzed (5% of the total UK spend was in the sample, and only 12% of that could be matched through the supply chain).
• The first issue is that in 9 out of the 15 months, marketers were charged a LOWER DSP fee than the contract required (see page 8). The report attributes this to issues with “reconciliations” at the impression level for the sample that could be matched. The unweighted delta between actual and contracted fees is about 5% on average, with a spread of +/- 3% or 6%.
• The second issue is that the SSPs analyzed showed an even greater unweighted discrepancy, a fee 28% HIGHER than contracted for, with an even greater spread of between 12.5% and -2% of publisher fees, which they equate to 22% or -4% of advertiser spend. These discrepancies in the “matched” impressions thus account for much of the 15% of “unaccounted” revenue.
• What they fail to mention is that such a large discrepancy is actually expected: the SSP is counting won auctions, while the advertiser is counting MRC-accredited rendered impressions, a difference which, according to analysis in 2016, was expected to produce roughly a 15% delta. https://www.admonsters.com/opspov-preparing-rendered-impression-revolution-updated

This is why the UK CMA suggested that one remedy for ensuring a level playing field in digital markets would be improved tracking across the supply chain. For example, standardized ad request IDs that flow through the entire supply chain would make it far easier to measure where the issues causing discrepancies are occurring and how publishers can increase the load rate (render / requests) of their inventory.
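A minimal sketch of what such a standardized request ID might look like in practice, assuming hypothetical names (AdRequest, mintAdRequest, logAndForward) that do not come from any existing specification: one ID is minted at the publisher and carried unchanged through each hop, so every party's logs can later be joined on it.

```ts
import { randomUUID } from "node:crypto";

// A single request ID minted at the publisher and carried through every hop.
interface AdRequest {
  requestId: string;   // shared, unchanged, by every party in the chain
  placementId: string;
  timestamp: number;
}

function mintAdRequest(placementId: string): AdRequest {
  return { requestId: randomUUID(), placementId, timestamp: Date.now() };
}

// Each intermediary logs the same ID before forwarding, so requests sent can
// later be reconciled against impressions rendered, per ID, across parties.
function logAndForward(request: AdRequest, hop: string): AdRequest {
  console.log(JSON.stringify({ hop, requestId: request.requestId }));
  return request; // forwarded unchanged; the ID must never be rewritten
}

const request = logAndForward(mintAdRequest("homepage-top"), "publisher");
logAndForward(request, "ssp");
logAndForward(request, "dsp");
```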

Given that large publishers, one of which is the leading browser implementer, have been found by regulators to be abusing their dominant position and have already paid billions in fines, many of them for privacy-related violations, ideally you would also agree that bad actors can exist in every stakeholder group. However, we should not discriminate against smaller publishers for the bad acts of larger publishers, or because they have challenges separating good actors from bad when choosing the supply chain partners required to operate their business.

We also agree that, given the priority of constituencies that puts people ahead of authors/publishers and both of those ahead of implementers/browsers, browsers should not be unilaterally making decisions on behalf of either group.

Given the above, hopefully we can reopen this issue to address the specific concerns I raised about the current text.

@torgo
Member

torgo commented Aug 10, 2020

@jwrosewell nothing you wrote above prompts me to reopen this issue. You're clouding the issue. If you have specific text changes that you would like to see to the document, feel free to suggest them.

@jwrosewell
Author

@torgo all proposers of new technical standards and changes are directed towards this document. It is an important document covering just some of the many important issues the W3C's purpose and vision encapsulate. The document needs to accurately reflect the vision and values of the W3C and guide proposers. As it stands, the document's premise is the issue, rather than a single sentence or paragraph that can be addressed via specific text changes alone.

On this issue a balanced debate is needed to find a way forward. As I understand the W3C process, this debate happens synchronously via meetings, and asynchronously via GitHub issues. In the case of this document, W3C members and other stakeholders are not able to join the TAG debate, as it is restricted to TAG members. This leaves the GitHub issue threads for non-members, and the PING meetings for W3C members.

You have closed the issue on the basis of the "fair-trade" scheme analogy and a consumer push towards supply chain transparency, neither of which counters the position that supply chains can be trusted. They can be trusted in both the examples you provide. What you are raising are good points about the conditions under which supply chains can be trusted, and this should be part of the debate and of alterations to the document; in the case of "fair-trade", that condition is an audit and certification process. You, and other commentators, have advanced positions that do nothing to support the document's premise that it is the role of W3C technical standards to interfere in people's trust choices by defining first and third parties and applying differential trust assumptions to each category. I observe that the W3C's "One Web" vision, order of constituents and ethical web principles all point away from such a position.

I'm merely assessing the document as presented, and from the perspective of someone who will likely be making proposals for technical standards in the future and who wishes to understand how such proposals will be assessed against the purpose and vision of the W3C, which I hope we all share.

@pes10k
Contributor

pes10k commented Aug 10, 2020

Speaking just for myself, I don't think there is any value in arguing by analogy in a standards document. As I mentioned in #83 (comment), we know the present system harms user privacy, relentlessly, and attempting to limit identifiability to the first party is an approach we think is likely to be achievable and desirable to users.

In order to make this conversation productive, what specific text changes would you like in the document? What seems unclear (again, w/o resorting to analogy)? The aims? The techniques?

@jwrosewell
Author

As requested I have made a number of proposed changes ahead of the PING meeting next week. See pull request.

#94

@jwrosewell
Author

Texas vs Google -

"Google actively coordinates with its competitors when it comes to privacy," the suit says. "Of course, effective competition is concerned about both price and quality, and the fact that Google coordinates with its competitors on the quality metric of privacy - one might call it privacy fixing - underscores Google's selective promotion of privacy concerns only when doing so facilitates its efforts to exclude competition".

Closing this issue is an example of "privacy fixing".

@wseltzer
Contributor

@jwrosewell Allegations in a complaint are not proof, so please stop citing them as such. I don't believe they have any place in these discussions.

@darobin
Member

darobin commented Dec 24, 2020

And the complaint alleges that Google and Facebook made a deal to collude to run the ad market, with noted consumer harm in price, diversity of offering, and privacy. Both of these vendors are near universal in publishers' supply chains. If the allegations are true, it is evident that publishers really can't be trusted, under the current data broadcasting regime, to set up trustworthy supply chains.

This matches the ISBA's findings that we can't track huge chunks of money through this supply chain, despite money being excludable; the idea that anyone can track data, which is non-excludable, will remain science fiction until we fix the supply chain to work well with the Web's architecture.

@jwrosewell
Author

@darobin I agree that Google are near universal in publisher supply chains. I think this is also true of the supply chains of all web participants.

I observe that the New York Times web site when accessed from the UK includes resources from the following domains owned by Google.

doubleclick.net
gstatic.com
google.com
googletagmanager.com
google-analytics.com

When accessed from Chrome, requests to all these domains, irrespective of any user preferences or publisher preferences, transmit a pseudonymous identifier to Google in the X-Client-Data header. It is precisely identifiers such as these that Google, among others, seek to remove from their competitors.

These domains relate to tag management, news syndication, advertising, analytics and fonts. Based on an analysis of the resources included in the web page, Google are the primary supplier to the New York Times web page. I hope we can agree that an organisation like the New York Times will have conducted a thorough analysis of its suppliers, particularly in relation to privacy, and will be shocked by the evidence in the Texas vs Google claim.
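For what it's worth, a minimal sketch of how this kind of included-resource analysis can be reproduced, using only the standard Resource Timing API; the grouping logic is mine, not part of any specification or of the analysis above. Run it in the browser console on a fully loaded page:

```ts
// Group the page's subresource requests by origin; any origin other than the
// page's own is a third-party supplier of some kind.
const byOrigin = new Map<string, number>();

for (const entry of performance.getEntriesByType("resource")) {
  const origin = new URL(entry.name).origin; // entry.name is the resource URL
  byOrigin.set(origin, (byOrigin.get(origin) ?? 0) + 1);
}

for (const [origin, count] of byOrigin) {
  if (origin !== location.origin) {
    console.log(`${origin}: ${count} resources`);
  }
}
```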

The ISBA report highlights the need for solutions that support transparency, common definitions, standardised T&Cs, data sharing, and audit. The report does not conclude that supply chains cannot be trusted, and does not support the closure of this issue by @torgo or the prevailing position of the TAG.

As this document is refined in the coming months considering the findings of the ISBA report should be a priority. I expect we will start to see solutions advanced that will ensure all participants can be held to account so that bad acts can be easily identified and bad actors sanctioned. As this document stands such solutions will be considered in a negative light when evaluated against this questionnaire. As such the questionnaire needs to be modified to support objective evaluation.

@jwrosewell
Author

See my comment against the W3C Process for a response to @wseltzer's comment.

@darobin
Member

darobin commented Dec 24, 2020

"We can't figure out where the money goes but trust me this supply chain is totally trustworthy" is not a position I thought I would ever hear anyone defend but it's 2020 I guess.

The TAG's work is meant to be reality-based. If and when there is evidence of trustworthy supply chains this issue will certainly be worth looking at. In the meantime I don't think that lobbying to reduce user security is a fruitful use of anyone's time.

@joshuakoran

joshuakoran commented Dec 24, 2020

Hopefully we do not need to wait for the outcome of the current proceedings to agree on the following points:

  1. While free will exists, good people can sometimes do bad acts.
  2. Bad Actors exist within both small organizations and large vertically integrated ones.
  3. Small organizations, by definition, must rely more on supply chains than vertically integrated ones.
  4. Impairing or eliminating organizations’ access to supply chains thus disproportionately impacts smaller organizations’ ability to operate their business.
  5. Thus, standards that discriminate against smaller organizations would be against the values of the W3C and such technologies should be out of scope for W3C discussions.

Hopefully we can also agree that, given the desire to minimize bandwidth utilization and the lack of processing power on many end-user client devices, much of the modern web experience relies on server-side processing. Thus, improving the transparency and accountability of this server-side processing, rather than impairing it, seems like a good path forward to improving the web for all.

@darobin
Member

darobin commented Dec 25, 2020

@joshuakoran Giving random power to third parties puts smaller companies at a disadvantage because they can't commit to the level of defensive capabilities that larger ones deploy. Under the status quo that you are so passionately defending, even large, well-resourced publishers struggle to keep up with third-party misbehaviour, too often intervening after the fact.

I also haven't noticed that this status quo is particularly good at preventing dominance from large companies. It's unclear why you'd want to defend it if you want to change that.

@joshuakoran

joshuakoran commented Dec 25, 2020

@darobin I think you misread my post. I agree that large publishers, such as your own, struggle to detect Bad Actors. I also agree with regulators’ findings of information asymmetries that large organizations have over smaller organizations in matching content to people to improve web experiences for people or for the marketers that often fund their access.

This is why I am advocating for improved transparency and accountability, rather than defending Bad Actors, whom we would both want to bring to justice.

My post highlights that impairing or eliminating the supply chains that smaller organizations rely upon to operate their business would be a step backward for a competitive marketplace.

Do you specifically disagree with any of my numbered points?

@darobin
Member

darobin commented Dec 25, 2020

I don't believe I have misread your position that maintaining the status quo in which third parties are granted indiscriminate access is somehow good for small companies. That position is simply not borne out by the facts.

Promises of increased transparency and accountability have been made countless times over the past two decades. All there is to show for it is a series of CYA initiatives like AdChoices or the TCF, or claims that "transparency and choice" somehow match anyone's expectations of respect and dignity. Don't get me wrong: you wish to defend the status quo and I respect you having that position. But given that it's the architecture that has failed to provide a sustainable business model for small publishers and that has given us massive fraud, failed privacy, and huge market failures, please understand that the burden of proof that it is worth maintaining is very much on those who wish to keep it. It'll take more than promises.

@joshuakoran

joshuakoran commented Dec 25, 2020

@darobin I think you do need to re-read my posts as I'm advocating for improvements over current state, rather than "defending" against change.

We can agree that there are bad actors in the world.
We can also agree that if the past initiatives were adequate, we would not be having this conversation.

You seem to be avoiding the question as to whether you disagree with any of my points and if so, which ones.

I hope you want to improve transparency and accountability.

Do you believe we should protect the ability for small organizations that must rely on supply chains to have choice?

@darobin
Member

darobin commented Dec 25, 2020

As I've pointed out, your numbered points in defence of the status quo simply do not match reality. There is nothing here to restrain the "ability for small organizations to rely on supply chains to have choice." This simply addresses one bad way of supporting supply chains that has proved bad for users and bad for smaller organisations.

@joshuakoran

joshuakoran commented Dec 26, 2020

@darobin, to be clear, I am defending the Open Web, which overwhelmingly consists of small publishers.

As described above, small publishers, by definition, must rely more on supply chains than vertically integrated large publishers and Walled Gardens.

As your company stated when launching your audience profiling and targeting products--not all publishers have the scale or resources to build out their own first-party data sets:

“This can only work because we have 6 million subscribers and millions more registered users that we can identify and because we have a breadth of content,” explained Allison Murphy, SVP of ad innovation.
“While a differentiator – and I’m thrilled about it – this isn’t a path available for every publisher,” she added, “especially not local [ones] who don’t have the scale of resources for building from scratch.”

-- Allison Murphy, Senior Vice President of Ad Innovation, The New York Times
https://www.warc.com/newsandopinion/news/publisher-shift-from-cookies-gathers-pace/43634

Thus this fact is not in dispute.

I respect your freedom of speech to argue on behalf of Walled Gardens. However, I hope we can agree your rights do not extend to impair the ability of your rivals to operate their businesses.

Thus, the question before us is whether we think the W3C ought to issue standards and policies that would foreclose the ability of small publishers to compete with larger ones. I for one would not support this. I sincerely hope you would not either.

Does this mean we need to defend the past or that no change is needed? Quite the contrary. I am recommending we improve transparency and accountability, so that the very bad acts you highlight can be more easily detected and bad actors brought to justice.

I hope you too support these aims.

@darobin
Member

darobin commented Dec 26, 2020

@joshuakoran It's very difficult to have this conversation when you keep bringing it back to your own industry's problem du jour and keep relying on adtech-specific lingo like using "walled garden" to mean "does not sell user data" instead of its usual meaning.

We have to look at the whole platform. Making powerful features available identically to first parties and to random ads is just a bad practice. Building a supply chain through third-party code injection is less safe than with auditable first-party components. The fact that this would need to be reiterated simply beggars belief.

You claim to have some magic sauce that somehow makes third-party code injection safe. Great! Show us the goods. It's a strong claim that requires strong evidence. So far I recall the same being promised by WAC, by BONDI, and by the IAB. Talk is cheap and none of those promises ever materialised. If you have that evidence just share it and we can talk. You're asking for a simple, commonsense security assessment question to be changed based on speculation and promises. Why would anyone believe that? If what you claim is true, all you have to do is show us. Making third parties trustworthy is a hard problem we all want fixed.

@joshuakoran

joshuakoran commented Dec 28, 2020

@darobin I agree we seem to be having difficulties communicating. Nevertheless I believe we do agree on a number of points.
We do agree that auditability is a key desirable feature that we can and should work to improve.

I also agree with your recommendations that when determining guidelines, standards and technology we should look at the system as a whole. We also agree that in defining guidelines, standards and technology, we should look beyond just advertising use cases that support people’s access to most web publishers.

Towards this end, my points above regarding protecting small publishers’ access to supply chains are not limited to advertising use cases. Publishers need more than ad-funded revenue to operate their business. Many publishers rely on supply chain partners (“third-parties”) to provide authentication services, payment and credit card verification services, fraud detection services, web site analytics services, content hosting services, content management services, A/B testing services, onsite search services, among many others.

“Making powerful features available” to all publishers seems preferable to limiting this power to only the largest publishers. Policies, guidelines and technology that impair the ability of smaller publishers to operate their business, by reducing the accuracy or timeliness of the data that feeds these services, would clearly favor larger, vertically-integrated publishers.

I truly hope you are not advocating we reduce competition or raise barriers to entry for new market entrants. Hopefully we can update this document to ensure that new proposals do not do either.

To address your question as to how I meant “walled gardens,” I did not mean to limit this term to advertising use cases. I was using the phrase the same way as Webopedia or Wikipedia. Webopedia defines it as:

a browsing environment that controls the information and Web sites the user is able to access.
https://www.webopedia.com/definitions/walled-garden

Wikipedia defines "walled garden" as a “closed platform” that

restricts convenient access to non-approved applicants or content. This is in contrast to an open platform, wherein consumers generally have unrestricted access to applications and content.
https://en.wikipedia.org/w/index.php?title=Walled_garden_(technology)

Moreover, I think this is the same meaning that has appeared in New York Times articles dating back over a decade:

  1. The Death of the Open Web - https://www.nytimes.com/2010/05/23/magazine/23FOB-medium-t.html

In the migration of dissenters from the “open” Web to pricey and secluded apps, we’re witnessing urban decentralization, suburbanization and the online equivalent of white flight.…

In spite of a growing consensus about the dangers of Web vertigo and the importance of curation, there were surprisingly few “walled gardens” online — like the one Facebook purports to (but does not really) represent.…

The far more significant development, however, is that many people are on their way to quitting the open Web entirely. That’s what the 50 million or so users of the iPhone and iPad are in position to do. By choosing machines that come to life only when tricked out with apps from the App Store, users of Apple’s radical mobile devices increasingly commit themselves to a more remote and inevitably antagonistic relationship with the Web.

  2. Apple and the Desire for Control - https://bits.blogs.nytimes.com/2012/11/19/apple-and-the-desire-for-control/

The big Internet companies — Apple, Amazon, Google — are all pursuing a “walled garden” approach, where they hope to do so much for their customers that they will never leave (Maybe it should be called a “Hotel California” strategy). Internet policy experts did not like walled gardens when America Online built one in the 1990s, and they do not like the prospect of another one triumphing now.

A private universe, they suggest, will ultimately prove a dead-end for innovation — and possibly for tech-sector jobs as well.

“If someone else controls the distribution of your work, and the pricing, then you don’t have a company, you have an affiliate,” said Brewster Kahle, founder of the Internet Archive, a nonprofit digital library.

If the goal is a diverse and truly free market for all tech entrepreneurs, “you won’t get there from here,” he added. “But if you want it to be a mall, we’re well on the way.”

… Another start-up, Fandor, a subscription service for independent films, had a different sort of problem. Fandor’s app has been downloaded over 100,000 times from Apple. Naturally, it wants to market to these fans directly. But when members subscribe to the service through Apple, it cannot. Apple considers the e-mail addresses its property.

“The more Apple can control, the more it will control,” said Dan Aronson, Fandor’s chief executive. “It’s nice to be king.”

Apple declined to comment on Tawkon or Fandor, but in general has maintained that tight control is essential for ensuring the quality of customers’ experiences. The company screens out potentially buggy or completely ridiculous apps, for example. And Apple says the vast majority of apps — more than 95 percent — are accepted on the first submission.

Mr. Kahle is not reassured. “Apple is creating a feeder system where they get to learn everyone else’s business model and then get to compete with them,” he said. “The lock-down is the biggest issue in the tech industry. There is a difference between the rule of law and the rule of mall police.”

  3. Facebook Offers Life Raft but Publishers Are Wary - https://www.nytimes.com/2014/10/27/business/media/facebook-offers-life-raft-but-publishers-are-wary.html

One possibility it mentioned was for publishers to simply send pages to Facebook that would live inside the social network’s mobile app and be hosted by its servers; that way, they would load quickly with ads that Facebook sells. The revenue would be shared.

That kind of wholesale transfer of content sends a cold, dark chill down the collective spine of publishers, both traditional and digital insurgents alike. If Facebook’s mobile app hosted publishers’ pages, the relationship with customers, most of the data about what they did and the reading experience would all belong to the platform. Media companies would essentially be serfs in a kingdom that Facebook owns.

It is a measure of Facebook’s growing power in digital realms that when I called around about those rumors, no one wanted to talk. Well, let me revise that: Many wanted to talk, almost endlessly, about how terrible some of the possible changes would be for producers of original content, but not if I was going to indicate their place of employment. (Many had signed confidentiality agreements, so there’s that as well.)...

Given the amount of leverage Facebook has, many publishers are worried that what has been a listening tour could become a telling tour, in which Facebook dictates terms because it drives so much traffic. (Amazon’s dominance in the book business comes to mind.)...

It reminds me very much of those times when other digital behemoths tried to persuade content providers into letting them host the publishers’ content. In the early days, when AOL was dominant, the service preyed on the publishers’ fear that if they didn’t put their content inside the walled garden of AOL, their content would be invisible. That strategy benefited AOL in the short run, but no one prospered in the long run.

And I remember a visit to Google when Sergey Brin, a founder of the company, and some of his colleagues talked about how clunky most news web pages were — sound familiar? — and offered to host content with quicker load times and a revenue share. That went nowhere fast.

Once companies reach a certain scale online, they have a tendency to decide that while they love the Internet, they would like a better version. And, oh, by the way, they should run it. (All considered, Apple has already pulled off that trick, creating a private enclave of apps that it controls.)

@jwrosewell
Author

I believe we can all agree on the following.

  1. Consensus at the W3C on this document does not exist in light of the comments made here and more generally.
  2. The issues this thread discusses are ones law makers and regulators from multiple countries are actively investigating and interested in addressing.

All industries have their bad actors. Consider LIBOR rate fixing in finance, diesel emissions cheating in automotive, or manipulation of energy markets uncovered during the Enron scandal. There will always be bad actors and bad acts in any industry. What these three examples each prove is that regulators will detect violations and executives will serve jail sentences when convicted of crimes.

I sincerely hope no one is advocating that the W3C ought to take on the role of law makers. What we can do is design standards that help regulators detect bad actors as defined by their regional laws.

Yet the authors of this questionnaire seem opposed to solutions that would make it easier to detect and identify bad acts and bad actors. Instead of supporting open and free markets, the authors advocate an approach that restricts website operator choices and disproportionately impacts their smaller rivals. This impact on competition seems to run afoul of at least the general W3C vision to support independent authors, if not also the W3C antitrust guidance that members should not use the W3C processes “to restrict competition.”

I originally raised this issue to fix problems with the Security and Privacy Questionnaire related to restricting website operators' choice and their access to the supply chain services needed to operate their business and compete against larger rivals. The multiple antitrust investigations that have since been brought highlight the need of almost all website operators to rely on supply chains to operate their businesses, as well as deep concerns when such choices are impaired and restricted.

The question before us is whether we will continue to create fair and open standards or pursue those that disproportionately benefit a minority of the largest trillion dollar organisations operating an oligopoly.

I therefore have two requests:

  1. @wseltzer to revoke the comment in relation to matters of competition, as choice and the disproportionate impact on small website operators are germane to this and other issues.
  2. @torgo to re-open the issue so that we can discuss the changes needed to the document to ensure we do not promote policies and guidelines that disproportionately impact the majority of web participants.

@darobin
Member

darobin commented Dec 28, 2020

Lack of consensus is not measured by the volume of rhetorical noise and unsupported assertions in a given discussion.

Let's return to the facts of the case. The TAG questionnaire offers a simple mitigation strategy to improve the security offered by Web standards. As anyone who understands security or Web technology knows, this is a possible mitigation strategy that one may have recourse to if necessary or sensible. It mandates nothing. It forecloses nothing.

All that the questionnaire says is "hey, if you build a feature that presents a threat surface for the user — say for instance a Bluetooth API that could be used to access the user's health information from their fitness device — you should think about making sure that it's available only to the site the user believes they're visiting rather than to a random ad on the page."
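A minimal sketch of that mitigation as a site might apply it today, assuming a plain Node server and using geolocation purely as a stand-in for whatever powerful feature is at stake; none of this is prescribed by the questionnaire itself:

```ts
import { createServer } from "node:http";

// Serve every page with a Permissions-Policy header that limits a powerful
// feature to the top-level origin only, so embedded third parties (e.g. ads
// in iframes) cannot invoke it unless the site explicitly delegates it.
createServer((_req, res) => {
  res.setHeader("Permissions-Policy", "geolocation=(self)");
  res.setHeader("Content-Type", "text/html");
  res.end("<!doctype html><p>Powerful features stay first-party here.</p>");
}).listen(8080);
```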

This part of the questionnaire is grounded in a simple fact: code brought in by third-party code injection is less manifest to the user and is inherently harder to audit and less safe. In fact, whole attack classes are possible because of third-party injection. This is just elementary understanding of how the Web works.

Now, the question is: are there cases in which a third party could be rendered safe enough that a user agent should be encouraged to grant it access to dangerous features? And the answer to that — given more than once and with an excess of patience throughout this thread — is: that would be cool, but you've got to prove it. Right now, it is manifestly not the case that a browser should ever make the decision to endanger the user by granting that kind of access to arbitrary third parties.

The TAG works on facts, not speculation. If and when there is proof that a third party can provably be trusted, say in the form of a Trusty3P API, then that will meet the bar for reopening decisions, which will be entirely appropriate.

The case made by @jwrosewell is that "All industries have their bad actors" and that "I sincerely hope no one is advocating that the W3C ought to take on the role of law makers. What we can do is design standards that help regulators detect bad actors as defined by their regional laws." While that is certainly an improvement over his previous conspiracy-theory-based insults about "privacy fixing," it is essentially saying that cars shouldn't have safety features because there are laws against drunk driving.

Everything else in this thread is either fanciful speculation or ominous-sounding thinly-veiled (but factually erroneous all the same) implications about foreclosing competition.

So what we have here is a request to make the Web less secure, based on no facts and on technology that does not exist, pushed forward with insulting conspiracy theories. I believe that the TAG and the broader Web community have been very patient here, but until such a time as the slightest shred of evidence can be put forth, I believe this thread should be closed.

@plinss
Member

plinss commented Dec 29, 2020

This thread has been closed for nearly 4 months. I don't see the current discourse adding anything that merits re-opening it.

I'm locking this thread so that it stops consuming the TAG's valuable time.

If anyone has anything new to add, start a new issue. Do note, however, that such an issue will require new, actionable information to support it or it will be closed summarily. As @darobin stated, the TAG works on facts; we do not consider unsupported speculation, and we consider long-winded, irrelevant rhetoric and other troll-ish behavior, as has been exhibited in this thread, to be a denial-of-service attack.

@w3ctag w3ctag locked as resolved and limited conversation to collaborators Dec 29, 2020