OFFICIAL
Dear Ivan Brecelic,
My decision about your complaint | 2025-809681
Thank you for your complaint about Services Australia – Centrelink.
I have considered your complaint and decided not to investigate.
The Commonwealth Ombudsman generally does not investigate a matter until it has been raised with the agency and the available internal complaint and/or review pathways have been completed. This gives the agency an opportunity to resolve the matter first.
We find that, in taking this approach, an agency is sometimes able to resolve the complainant’s concerns directly using its own internal process, which we encourage. Even when this is not the case, the internal review process will still result in a record of the complaint issue and the agency’s response, which will allow us to review the agency’s consideration of the issue and any actions taken in response.
From the information you have provided, you have not lodged a formal complaint with the agency. This means it is too early for us to consider investigating your concerns.
If you still have concerns regarding Services Australia – Centrelink, please find your next steps explained below.
Your next steps
Lodging a complaint
You can find out more about how to lodge your complaint with Centrelink online, or by calling the Centrelink Complaints & Feedback line on 1800 132 468.
Centrelink’s Feedback & Complaints area will generally respond to complaints within 10 business days and will attempt to call you from a private number. If you have not received a response, we suggest you follow up with Centrelink in the first instance.
If you did not receive a response to your complaint, or are dissatisfied with the response provided, you are welcome to return to us with a new complaint. We can then consider whether Centrelink has appropriately considered and responded to your complaint.
Review pathway
If you are unhappy with a decision made by Centrelink, you can request an internal review by a Centrelink Authorised Review Officer (an ARO review). To request an internal review, you can call Centrelink on its general number or its complaints line, visit a service centre, or submit an online review form.
Should you experience a delay in this process, we suggest you first lodge a complaint with Centrelink. As explained above, should you be unhappy with the response provided, or should you not receive a response, you are welcome to contact us by lodging a new complaint and we can consider the issue of delay.
If you are dissatisfied once you have received a decision from Centrelink regarding a review, it is open to you to lodge an appeal with the Administrative Review Tribunal (ART) for an independent review of the decision. While this Office cannot review or change Centrelink’s decisions, the ART has these powers and can remit decisions back to Centrelink to be reconsidered or changed. Details on how to contact the ART will be in the review decision letter. Further information about Centrelink’s review process is available on its website.
Decision
As it does not appear you have lodged a formal complaint, our Office will not be investigating your complaint. I have finalised my assessment of your complaint and it has now been closed.
However, if you follow Centrelink’s process and are dissatisfied with the outcome, you are welcome to contact us to lodge a new complaint.
We ask that you provide:
a copy (or summary) of your complaint lodged with Centrelink.
a copy (or summary) of Centrelink’s response.
a clear explanation of why you are unhappy with Centrelink’s response.
If you would like to discuss my decision, please contact me by reply email or by calling the Commonwealth Ombudsman on 1300 362 072. Our phone service is available Monday to Friday, 10:00 am to 4:00 pm AEST or AEDT, except on Wednesdays, when our phone lines close at 2:00 pm. We are closed on national public holidays and some state holidays.
If you disagree with my decision, you can request a review by completing the online form on our website.
You should request a review within 3 months of being told about our decision. Your review request should clearly identify why you believe the decision was wrong and provide any additional supporting information or evidence. You can find additional information about our review process on our website.
Thank you again for bringing your concern to my attention and I trust this information is of assistance.
Your Personal Information
The Ombudsman’s Office uses personal information we collect from you to assist us with our handling of your enquiry. Further information about the way the Ombudsman’s Office handles your personal information, including how you may access and seek correction of that information, can be found in the privacy statement available on our website, www.ombudsman.gov.au.
From time to time, we ask an external company to conduct surveys of people who have contacted us so we can collect feedback on our performance.
Your personal information, including your contact details, demographic statistics and basic information about your complaint, such as when you came to us, how you lodged your complaint, the agency you are complaining about, how long it took us to resolve your complaint, and how we resolved your complaint, may be provided to that external company. If you do not agree to this happening and do not wish to be contacted about your experience using our services, please notify us by replying to this email. If you are unable to reply to this email, contact us on 1300 362 072.
Yours sincerely,
Joshua
Complaints Officer
Commonwealth Ombudsman
Phone: 1300 362 072
Email: ombudsman@ombudsman.gov.au
Website: ombudsman.gov.au

Influencing systemic improvement in public administration
The Office of the Commonwealth Ombudsman acknowledges the traditional owners of country throughout Australia and their continuing connection to land, culture and community. We pay our respects to elders past and present.
---------------------------------------------------------------------
COMMONWEALTH OMBUDSMAN - IMPORTANT CONFIDENTIALITY NOTICE
This e-mail message or an attachment to it is confidential, and it is intended to be accessed only by the person or entity to which it is addressed. No use, copying or disclosure (including by further transmission) of this message, an attachment or the content of either is permitted, and any use, copying or disclosure may be subject to legal sanctions. This message may contain information which is:
* about an identifiable individual;
* subject to client legal privilege or other privilege; or
* subject to a statutory or other requirement of confidentiality.
If you have received this message in error, please call 1300 362 072 to inform the sender so that future errors can be avoided.
---------------------------------------------------------------------
Hacker News
YouTube’s Algorithm Incentivizes the Wrong Behavior (nytimes.com)
203 points by furcyd on June 14, 2019 | 251 comments
strikelaserclaw on June 14, 2019 | next [–]
"If YouTube won’t remove the algorithm, it must, at the very least, make significant changes, and have greater human involvement in the recommendation process.", man does this person know how many videos and how many users YouTube has? They cannot use anything except an algorithm to recommend videos. They cannot use anything except an algorithm to detect videos inappropriate for children. It seems YouTube is working on this, and this opinion seems like a ill thought out fluff piece to enrage readers and sell this persons book.
kartan on June 14, 2019 | parent | next [–]
> They cannot use anything except an algorithm to recommend videos.
I agree that with the current business model it is not possible for YouTube to sort it manually.
When I was a kid, a long long time ago, it would have been impossible to conceive of a TV channel showing that kind of content regularly and staying open. If their answer had been that they could not fix it because it costs money, there would have been an outraged response.
If YouTube cannot keep things legal, cannot respect people's rights, cannot be a good, responsible part of society because it is not cost effective, then to me the way to go is clear. And that is true for YouTube, Facebook or any other business, digital or not.
d1zzy on June 14, 2019 | root | parent | next [–]
Youtube is not a TV channel, it's a crowdsourced video-sharing site.
If we want to have a "free" (as in no subscription and no money required to be paid for the service) video sharing/uploading site, what model would make it work and still have human review? I consider the fact that there may be undesirable videos the cost of having such a site, similar to how the "cost" of having a free Internet is that there's going to be lots of hate online and free access to tutorials on making bombs and whatnot. It's part of the deal and I'm happy with that, YMMV. If you worry about what kids might access, then don't let them access Youtube, but please don't create laws that would make free video sharing sites illegal/impossible to run.
This is true for pretty much any free Internet service that allows user content. If all Internet content production goes back to just "official" creators (because they are the only ones where the cost/benefit math makes sense), I think that would be a huge loss/regression of what we have gained since the age of the Internet.
seanmcdirmid on June 15, 2019 | root | parent | prev | next [–]
When I was a kid in the 80s, cartoons were basically 30 minute toy commercials. My toddler loves watching videos on YouTube of Russian kids playing with toys, so I guess things haven’t changed much.
CamperBob2 on June 15, 2019 | root | parent | prev | next [–]
How about actually demonstrating harm to children (or to anyone else) before launching a moral panic?
Is that an option?
shearskill on June 15, 2019 | root | parent | next [–]
I’d say having a 13 year old far right YouTube star post a video threatening to kill the CEO might be harmful, but maybe that’s ok?
https://www.newstatesman.com/science-tech/social-media/2019/...
CamperBob2 on June 15, 2019 | root | parent | next [–]
Do you seriously think that kid was radicalized on YouTube? Where were the parents?
shearskill on June 15, 2019 | root | parent | next [–]
2018: "I'll pick a topic and just give my opinion about it, try to be entertaining, try to be funny, try to be unique and say something other people haven't said before," the YouTuber said.
https://redwoodbark.org/46876/culture/redwood-students-view-...
2019:
In response, the principal of the high school sent a note to students and parents Thursday night regarding the "hate-based video and text posts attributed to one of our students":
https://www.kron4.com/news/bay-area/bay-area-girl-says-she-l...
robbrown451 on June 15, 2019 | parent | prev | next [–]
I would think having humans more involved in training the algorithm could scale much better.
Also, detecting videos that are inappropriate for children is a lot harder than determining which content creators can be trusted to post videos that are appropriate (and to tag them correctly). That can be learned from the user's history: how many times their stuff has been flagged, whether they get upvotes from users that are themselves deemed credible, and so on. The more layers of indirection, the better, a la PageRank.
So even without analyzing the video itself, the system would have a much smaller set of videos it can recommend from, but still potentially millions of videos. You still need some level of staff to train the algorithm, but you don't have to have paid staff look at every single video to have a good set of videos it can recommend. The staff might spend most of their time looking at videos that are anomalous, such as ones posted by a user the algorithm trusted but then flagged by a user the algorithm considered credible. Then they would tag that video with some rich information that will help the algorithm in the future, beyond just removing the video or reducing the trust of the poster or the credibility of the flagger.
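Sketched in code, that trust-propagation idea might look something like this (a rough illustration only; the "update_trust" and "recommendable" helpers, the neutral 0.5 starting score, and the thresholds are all made up, not anything YouTube actually does):

    # Toy creator-trust score driven by signals from credible users.
    def update_trust(creator_trust, user_credibility, events, rate=0.1):
        """events: (creator, user, signal) tuples, where signal is +1 for an
        upvote and -1 for a flag. A signal counts in proportion to the
        credibility of the user sending it, PageRank-style: feedback from
        credible users moves a creator's trust more than unknown users'."""
        for creator, user, signal in events:
            weight = user_credibility.get(user, 0.0)
            score = creator_trust.get(creator, 0.5)  # new creators start neutral
            score += rate * signal * weight
            creator_trust[creator] = min(1.0, max(0.0, score))  # clamp to [0, 1]
        return creator_trust

    # Only creators above a threshold feed the kid-safe recommendation pool.
    def recommendable(creator_trust, threshold=0.8):
        return {c for c, t in creator_trust.items() if t >= threshold}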
PaulAJ on June 15, 2019 | root | parent | next [–]
The trouble with depending on user flags is that it creates opportunities for blackmail.
https://www.theverge.com/2019/2/11/18220032/youtube-copystri...
ehsankia on June 15, 2019 | root | parent | prev | next [–]
The algorithm works really damn well for 99.999% of the cases. It manages to show me great recommendations from very niche things I'm interested in. But it's the very same behavior that can, in some cases, lead to issues.
nraynaud on June 15, 2019 | root | parent | next [–]
For me, it always pulls towards television or Hollywood garbage. And videos I have already watched, hundreds of them.
jotm on June 15, 2019 | root | parent | next [–]
You should check if personalized recommendations are disabled. Google has a history of disabling/enabling settings without telling me.
Barrin92 on June 15, 2019 | root | parent | prev | next [–]
Are you sure that it's not you who knows very well how to curate your own content and whom to subscribe to, rather than the recommendation system?
I'm not sure heavy automation is needed here; people jump from content creator to content creator by word of mouth. In contrast, most algorithmic suggestions to me seem highly biased towards what is popular in general. I click on one wrong video in a news article and for the next two days my recommendations are pop music, Jimmy Kimmel, Ben Shapiro and animal videos.
ehsankia on June 15, 2019 | root | parent | next [–]
Not for me. For example, I've been watching a few PyCon and I/O talks, and it's been showing me other interesting PyCon talks that are highly ranked. It's also giving me good AutoChess and OneHourOneLife Let's Plays, both of which I've been very interested in lately.
All three things I just mentioned are fairly niche, comparatively, yet it knows that I've been watching a lot of them lately and is giving me more of it.
robbrown451 on June 15, 2019 | root | parent | prev | next [–]
I'm reminded of how Google Images had an issue where dark-skinned people sometimes turned up in a search for "gorilla". 99.9% of the time, the image recognition algorithm did really well, but here was a case where an error was really offensive. What was (probably) needed was for a human to come in and, rather than tag every gorilla image, simply give the model some extra training around dark-skinned humans and gorillas, or otherwise tweak some things specific to that sort of case, so the chance of it happening was reduced to nearly nothing.
There are probably a ton of situations like that in YouTube, where certain kinds of mistakes are hardly noticed (it shows you a video you weren't remotely interested in), but others can be really bad and need special training to avoid (such as where it shows violent or sexual content to someone who likes nursery rhymes and Peppa Pig).
andrewvc on June 14, 2019 | parent | prev | next [–]
Maybe they can't make editorial recommendations for the long tail, but they absolutely could do so for the top few thousand videos each week.
Would that yield an improvement? I don't know, but it would have an impact.
scj on June 14, 2019 | root | parent | next [–]
I'm kind of wondering if a "Ned Flanders" user-detector is possible.
Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.
Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag.
Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.
The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.
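Read as code, the stop-time heuristic above might look something like this (a toy sketch; "flanders_rating" and every threshold in it are hypothetical):

    import statistics

    def flanders_rating(stop_times, video_length):
        """Classify a video from where 'Flanders' test users stopped watching.
        stop_times: seconds into the video at which each test user stopped."""
        if not stop_times:
            return "unclear"
        finished = [t for t in stop_times if t >= 0.95 * video_length]
        early = [t for t in stop_times if t < 0.95 * video_length]
        if len(finished) >= 0.8 * len(stop_times):
            return "probably safe"  # watched the whole way through en masse
        if len(early) >= 2 and statistics.stdev(early) < 0.05 * video_length:
            return "red flag"       # everyone bailed at about the same moment
        return "unclear"            # scattered stops: route to manual review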
gus_massa on June 14, 2019 | root | parent | next [–]
Should we filter all the pro-choice videos or the pro-life videos?
Should we filter all the Santa-is-fake videos or the Santa-is-real videos?
Do you agree with Flanders?
undersuit on June 15, 2019 | root | parent | next [–]
Maybe Youtube and their revenue sources agree with him.
Nasrudith on June 14, 2019 | root | parent | prev | next [–]
The question I have is: how can they tell "Flanders" viewers from "bored" ones or "out of time" ones, short of them flagging it, without a lot of manual review and guesswork?
Reviewing viewers on that level sounds even more intensive than filtering every channel and video.
scj on June 14, 2019 | root | parent | next [–]
In the system I've proposed, if enough test Flanders users are thrown at the content, the times at which they close the video should be different enough to trigger an unclear Flanders rating. This would indicate some other metric should be used.
I don't see this test working in isolation. Given its nature, its value is in obscure rejection statements rather than acceptance (or "okilly-dokillies" in this case).
To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.
Mirioron on June 14, 2019 | root | parent | prev | next [–]
The problem is that just a few examples of the algorithm getting it wrong are enough to cause an adpocalypse. If millions of videos are uploaded every month, then you can imagine how low the error rate has to be.
scj on June 14, 2019 | root | parent | next [–]
If Google takes the impractical route and hires a sufficient number of multilingual Ned Flanders, then they're still probably going to have a non-zero false positive rate (humans make mistakes too).
Whatever they do is going to have to be evaluated in terms of best effort / sincerity.
Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks.
gibrown on June 14, 2019 | parent | prev | next [–]
The total number of videos really doesn't matter; what matters is the total number of creators, which at least this site claims is a total of 50m for all time: https://mediakix.com/blog/youtuber-statistics-content-creato... (first result I found)
Just start banning certain creators from showing up in recommendations if their content crosses the line. Not that hard if you are willing to do it.
cortesoft on June 14, 2019 | root | parent | next [–]
But how would that solve the problem that the article opened with? There is nothing wrong with the videos of children playing; the wrong part was recommending them to pedophiles.
gibrown on June 14, 2019 | root | parent | next [–]
Feels like the article was about more than that one issue. It also discussed creators splicing in frames of Mickey Mouse and other methods of gaming the algorithm. Most of the responses here seem to be buying into Google's hype around the number of hours or videos uploaded per second. I think that is a distraction that lets them off the hook for not managing the community they built.
Every algorithm is an editorial decision.
undersuit on June 15, 2019 | root | parent | prev | next [–]
No, the wrong part was when the pedophiles made inappropriate comments on the videos.
Buge on June 15, 2019 | root | parent | next [–]
If that's the problem, then gibrown's solution
>Just start banning certain creators from showing up in recommendations if their content crosses the line.
also won't help, because it's not the creators that have content crossing the line, it's the commenters.
pythonwutang on June 14, 2019 | parent | prev | next [–]
> They cannot use anything except an algorithm to recommend videos
That’s assuming recommendations need to be personalized. They could recommend at a higher level to groups of people using attributes like age range or region.
I’m not a fan of their personalized recommendations. Its algorithm overfits my views, recommending videos extremely similar to videos I've recently watched, which isn't really aligned with my interests.
If they took a completely different approach (not personalized) it could really impact the UX in a positive way.
icebraining on June 15, 2019 | root | parent | next [–]
No thanks. You try logging out and see the generic recommendations. It's the lowest common denominator, just like anything else targeted at large masses of people.
moomin on June 14, 2019 | parent | prev | next [–]
You are 100% not thinking big enough. These algorithms identify clusters. These clusters can be examined through random sampling. It doesn’t take a genius to spot that a cluster that involves children and pornography might have some problems.
Of course, the system doesn’t expose these kinds of outputs, because no-one has any interest in designing such a system and taking responsibility for the content.
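For what it's worth, the sampling part is cheap. A toy version of such an audit (the "audit_sample" function and the cluster labels are hypothetical; the point is only that clusters can be spot-checked by humans without reviewing everything):

    import random

    def audit_sample(cluster_assignments, per_cluster=5, seed=0):
        """cluster_assignments: {video_id: cluster_id}. Returns a handful of
        videos from each cluster for a human reviewer to eyeball."""
        rng = random.Random(seed)
        by_cluster = {}
        for video, cluster in cluster_assignments.items():
            by_cluster.setdefault(cluster, []).append(video)
        return {c: rng.sample(vids, min(per_cluster, len(vids)))
                for c, vids in by_cluster.items()}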
jgalt212 on June 14, 2019 | parent | prev | next [–]
> man does this person know how many videos and how many users YouTube has
While that might be true, 99% of the views are a very small subset of the videos posted. It's completely doable, or at the very least the problem can be greatly mitigated by putting more humans into the process and not letting the algos recommend videos that haven't been viewed by someone in Youtube's equivalent of "standards and practices". All that being said, I fear the primary reason this is not done is because such actions would reduce the number of hours of viewed videos and ad revenues. In fact, I've read articles supporting this theory.
Google under Pichai is basically like Exxon under Lee Raymond--solely focused on revenue growth and completely blind to any number that doesn't show up on the current and next quarter's income statement.
sharcerer on June 15, 2019 | root | parent | next [–]
Pichai doesn't come off as enthusiastic. I am a heavy Google product user. I watch all the hardware and I/O events etc., and I have seen him use the same sentences multiple times over the past 2 years across events. I get that he won't exude the same charm and excitement as a founder-CEO; nevertheless, a lot is left to be desired. A lot of his public statements feel like carefully crafted PR responses. Nothing wrong with crafted responses. When you are an $800 billion company, you gotta be careful, but at least try to give off the perception of being authentic. Google is really bad at the perception game. Apple's really good at that. But I have a strong dislike for stupid moves, even more so than bad moves, and Google has made lots of those stupid ones.
scarface74 on June 14, 2019 | parent | prev | next [–]
Just to add on, a Youtube executive was recently on a podcast and she said there are 500 videos uploaded per second.
v7p1Qbt1im on June 18, 2019 | root | parent | next [–]
Probably Neal Mohan on Recode, right? The current public number is 500 hours uploaded per minute, but that number has been floating around for a while. It's probably higher now.
RugnirViking on June 14, 2019 | root | parent | prev | next [–]
that's... actually shockingly few
nostrademons on June 14, 2019 | root | parent | next [–]
The stat I heard while at Google (~5 years ago) was that 8 hours of video is uploaded every second. Cross-checking that against the 500 videos/sec figure, it implies that the average video is about 1 minute long. I suspect the 8-hours figure is pretty out-of-date now, and it's more like 20 hours/sec.
BTW, you could do some simple math to figure out how many employees it'd take to have a human watch every video that comes in: 3,600 seconds/hour * 20 hours of video uploaded per second = 72,000 seconds of video arriving each second, i.e. 72,000 people watching in real time; * 3 to cover the day in 8-hour shifts = 216,000 employees; * $30K/year ≈ $6.5B/year. It's theoretically doable, but you wouldn't get the product for free anymore.
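As a sanity check, the back-of-envelope arithmetic above reproduces as follows (a throwaway calculation using the comment's assumed figures):

    hours_uploaded_per_sec = 20   # assumed upload rate, per the comment above
    secs_per_hour = 3600

    # Seconds of video arriving per second = reviewers watching in real time
    reviewers = hours_uploaded_per_sec * secs_per_hour   # 72,000
    employees = reviewers * 3                            # 8-hour shifts -> 216,000
    annual_cost = employees * 30_000                     # at $30K/year each

    print(f"{employees:,} employees, ${annual_cost / 1e9:.2f}B/year")
    # -> 216,000 employees, $6.48B/year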
seretogis on June 14, 2019 | root | parent | next [–]
$30k/year seems high. This is the sort of work that would be likely outsourced, perhaps to the Philippines for less than $10k/year per person.
$2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.
nostrademons on June 14, 2019 | root | parent | next [–]
$30K/year is minimum wage in Sunnyvale and Mountain View, where Google headquarters is.
YouTube could probably outsource it internationally, but that'd just spark a new round of outrage: "Why are global community standards set by an American technology company outsourced to poor workers in the Philippines? Are these the people we want deciding our values?"
taxidump on June 14, 2019 | root | parent | next [–]
This is probably not the thought process this issue would travel down. Costs are typically the first consideration for a semi-skilled position if sounding like a native English speaker isn't a requirement.
deanCommie on June 15, 2019 | root | parent | prev | next [–]
Because you'd be able to get humans with higher intelligence and better judgement for $10k/year in the Philippines than at minimum wage in the US.
mc32 on June 15, 2019 | root | parent | prev | next [–]
They already outsource their moderation to mostly the Philippines so there’d be no change.
scarface74 on June 14, 2019 | root | parent | prev | next [–]
Considering that rumors are that YouTube is still barely above break even, that is a lot.
icelancer on June 15, 2019 | root | parent | prev | next [–]
>> $2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.
One is an investment/one time purchase and the other is a long-term annual liability, slated to grow.
gowld on June 14, 2019 | root | parent | prev | next [–]
A billion videos per year is shockingly few?
razius on June 15, 2019 | parent | prev | next [–]
They don't care; they want to push people into approved content rather than recommended content. Aka "these are the topics that you are allowed to speak of".
See the current Pinterest scandal and the banning from Youtube of any video mentioning it.
mtgx on June 14, 2019 | parent | prev | next [–]
All true. But all of this is making me wonder - what are the people thinking who say they can't wait for our society to be run by AI? The apex of AI capability can't even recommend videos properly right now, and we want it to run all the aspects of our society?! No, thanks.
icebraining on June 15, 2019 | root | parent | next [–]
What those people actually mean is "I can't wait for AI to be so good that it'll be obvious that it should run all the aspects of our society". The current state is irrelevant, nobody wants to put those in charge.
jodrellblank on June 15, 2019 | root | parent | prev | next [–]
What are the people thinking who say they can't wait for our society to be run by humans? The most common state of human government capability can't even put human suffering before numbers in a virtual bank account, can't prioritise truth over falsehood, can't restrain themselves from bribery, can't reliably turn up to hearings or ballots, can't organise projects and complete them, can't compromise when millions of people depend on it. We want to dismiss alternatives which haven't even been developed yet for not being good enough?
v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]
The argument is that a hypothetical benevolent ASI can't be corrupted like literally all humans can. Those people are likely referring to AIs as they appear in Iain Banks's Culture series.
blueboo on June 14, 2019 | parent | prev | next [–]
Such an effort would cost literally millions of dollars and surely sink this fledgling startup
ggggtez on June 14, 2019 | root | parent | next [–]
I don't think sarcasm with no substance behind it is very insightful.
Humans are involved in the process. To suggest otherwise is to be willfully ignorant.
rosterface on June 14, 2019 | prev | next [–]
Ah, the classic "think of the children!" argument. It is no one's responsibility other than the parent's to ensure their child isn't watching inappropriate content (which will be different for every family and individual).
This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ: the New York Times may not like the result, but they work for the vast majority of users on any service with too much content to manually curate.
Miner49er on June 14, 2019 | parent | next [–]
I don't think that's the point. It is false advertising for YouTube to create YouTube Kids for kids, and then serve content on it that is not appropriate for kids.
FabHK on June 14, 2019 | parent | prev | next [–]
> This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ,
The article cites actual instances and recurring problems showing that "machine learning and collaborative filtering are incapable of producing healthy recommendations": even when YouTube tried to produce child-friendly content, they failed. You can't just say "it's fine" after the article shows it not being fine.
AbrahamParangi on June 14, 2019 | parent | prev | next [–]
Setting aside the personal responsibility angle for the moment (which I agree with you on!) don't you think that negative externalities should generally be managed?
YouTube is a paperclip maximizer (where paperclips correspond to eyeball-hours spent watching YouTube) and at some point optimizing paperclips becomes orthogonal to human existence, and then anticorrelated with it.
I think it's a perfectly fair thing to say that maybe the negatives outweigh the positives at the present.
(This argument doesn't apply solely to YouTube, of course)
bengotow on June 14, 2019 | root | parent | next [–]
I generally agree with you, but I think YouTube being safe for kids became their problem when they launched a version specifically for kids and marketed it as safe.
RHSeeger on June 14, 2019 | parent | prev | next [–]
> It is no one’s responsibility other than the parent to ensure their child isn’t watching inappropriate content
Society has had laws in place to prevent children from viewing things they should not be viewing (inappropriate movies, magazines, etc.).
scarface74 on June 14, 2019 | root | parent | next [–]
What law is there to prevent a kid from going on the internet and going to “inappropriate” sites? Watching video on cable? Finding their Dad’s Playboy magazine back in the day?
Fins on June 14, 2019 | root | parent | next [–]
On cable there are ways to lock out channels, setting ratings on the TV and all that. If dad doesn't hide his Playboy well enough, it's obviously on him to fix it.
On the internet it is much more difficult, of course, and we can't realistically expect some shady offshore site to implement age checks, let alone recommendation algorithms. But Google is a public, respected company from a first-world country that claims to be promoting social good (which, of course, is marketing BS, and even if it weren't, I would not want their idea of social good, but still). You'd think that they would invest some effort into not showing inappropriate content to kids at least. But no, they throw up their hands and go on ideological witch hunts instead.
scarface74 on June 14, 2019 | root | parent | next [–]
I’ve got an idea - don’t let your kids get on YouTube and only allow them to get on curated sites. You can easily lock down a mobile device to only allow certain apps/curated websites.
Fins on June 15, 2019 | root | parent | next [–]
I don't let mine anywhere near a TV or computer. Of course that might be a bit more difficult once they get old enough to actually reach the keyboard...
But then I try to not let my mom on YouTube either. Or myself, for that matter.
v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]
lol, do you even children. They will always find a way. You can restrict apps and services all you want. How about their friends at school? Are you going to restrict their phones as well? The only thing that works is actually talking to the kids about things they've seen/experienced. Not saying that is easy of course.
Nasrudith on June 14, 2019 | root | parent | prev | next [–]
No we don't, not in the US. Short of literal pornography that could fall afoul of corruption-of-a-minor laws, the state isn't involved. That is just from ratings cartels and pressure groups.
If nobody gives a fuck enough to affect business, you can give the complete Saw series to 3-year-olds and all the offended can do is yelp indignantly.
manfredo on June 14, 2019 | root | parent | prev | next [–]
Nope. This only applies to pornography, if I recall correctly. There are no laws against showing R-rated movies to kids; it's just the theaters that refuse to admit them. In 2011 the courts struck down a California law prohibiting the sale of M-rated games to minors, too.
taeric on June 14, 2019 | parent | prev | next [–]
This implies there is no societal benefit from healthy options.
The parents are the best placed to know at an individual level. But responsibility is a cop-out if you are just dropping it on someone.
Granted, I agree it is a hard problem. Not even sure it is solvable. :(
rspeer on June 14, 2019 | parent | prev | next [–]
There are healthy recommender systems, like Spotify.
YouTube is a _disastrously_ unhealthy recommender system, and they've let it go completely out of control.
throw20102010 on June 14, 2019 | root | parent | next [–]
Spotify's recommendation system is dealing mostly with artists that have recording contracts and professional production; its problem shouldn't be compared to YouTube's, which has to deal with a mix of professional, semi-pro, and amateur content. Also, there's more of a "freshness" aspect to a lot of YT videos that isn't quite the same as what Spotify has to deal with (pop songs are usually good for a few months, but many vlogs can be stale after a week). Not only that, but many channels have a mix of content, some that goes stale quickly and some that is still relevant after many months; how does a recommendation engine figure that out?
It's better to compare Spotify's recommendations to Netflix's recommendations, which also deals with mostly professional content. Those two systems have comparable performance in my opinion.
slg on June 14, 2019 | root | parent | next [–]
Why the content exists is also important. People create video specifically for Youtube. Very few people create music just to host it on Spotify. This results in the recommendation algorithm and all its quirks having a much bigger impact on the content of Youtube than on that of Spotify. Also, having that many people actively trying to game the recommendation algorithm can pervert that algorithm. That simply isn't a problem for sites like Spotify or Netflix.
jasode on June 14, 2019 | root | parent | prev | next [–]
>YouTube is a _disastrously_ unhealthy recommender system,
Can you explain with more details?
I use Youtube as a crowdsourced "MOOC"[0] and the algorithms usually recommended excellent followup videos for most topics.
(On the other hand, their attempt at matching "relevant" advertising to the video is often terrible, e.g. Sephora makeup videos for women shown to the male-dominated audience of audiophile-gear videos. Leaving aside the weird ads, the algorithm works very well for educational vids that interest me.)
[0] https://en.wikipedia.org/wiki/Massive_open_online_course
ilikehurdles on June 14, 2019 | root | parent | next [–]
Yes. Elsagate is an example - the creepy computer-generated violent and disturbing videos that eventually follow children's content - or the fact that just about every gaming-related video has a recommendation for a far-right rant against feminism or a Ben Shapiro screaming segment. There's also the Amazon problem, where everything related to the thing you watched once out of curiosity follows you everywhere around the site.
jasode on June 14, 2019 | root | parent | next [–]
>Elsagate is an example,
Yes, I was aware of Elsagate.[0] I don't play games so didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.
I guess I should have clarified my question. I thought gp's "unhealthy" meant Youtube's algorithm was bad for somebody like me who views mainstream, non-controversial videos. (An analogy might be gp (rspeer) warning me that asbestos and lead paint are actually cancerous but the public doesn't know it.)
[0] https://news.ycombinator.com/item?id=20090157
undefined1 on June 14, 2019 | root | parent | next [–]
> I don't play games so didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.
They don't. That's confirmation bias at work.
smt88 on June 14, 2019 | root | parent | next [–]
It's not 100%, but I'd consider "video games" => "Ben Shapiro" to be a pretty awful recommendation system, regardless of the reasoning behind it. As far as I know, the group "video gamers" doesn't have a political lean in either direction.
I've definitely seen this with comics. I watched a few videos criticizing Avengers: Infinity War, and now I see mostly Ben Shapiro recs. It makes no sense. I have never sought out (and never plan to seek out) political content on YouTube.
nostrademons on June 14, 2019 | root | parent | prev | next [–]
I watch a number of gaming videos and have never had a far-right video recommended. Don't know who Ben Shapiro is.
It could be the type of games involved, since I usually watch strategy, 4x, city-building, and military sims. I usually get history-channel documentaries or "here's how urban planning works in the real world" videos recommended, which suits me fine. Somebody whose gaming preferences involve killing Nazis in a WW2-era FPS might be more likely to get videos that have neo-Nazis suggesting we kill people.
ilikehurdles on June 14, 2019 | root | parent | prev | next [–]
Some of the child comments of your thread mention the nazi problem.
jasode on June 14, 2019 | root | parent | next [–]
But that child comment didn't link Nazis to normal "video games". I assumed he just meant that some folks (e.g. "1.8%" of web surfers) with a predilection for far-right videos would get more Nazi recommendations. Well yes, I would have expected the algorithm to feed more of what they seemed to like.
I do not see any Nazi far-right videos in 1.8% of my recommendations ever.
sorenn111 on June 14, 2019 | root | parent | prev | next [–]
Isn't that an inevitable side effect of collaborative filtering? If companies could do content-based recommendation, wouldn't they? Until purely content-based recommendations are possible, wisdom-of-the-crowds collaborative filtering will lump together videos that are about different things but watched by similar viewers.
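A toy illustration of that lumping effect, with made-up viewers and videos (similarity here is plain Jaccard overlap of audiences, a stand-in for whatever a real recommender actually uses):

    from itertools import combinations

    # Two videos look "similar" purely because the same people watched both,
    # regardless of what the videos are actually about.
    views = {
        "alice": {"nursery_rhymes", "toy_unboxing"},
        "bob":   {"nursery_rhymes", "toy_unboxing", "disturbing_parody"},
        "carol": {"toy_unboxing", "disturbing_parody"},
    }

    watchers = {}  # video -> set of users who watched it
    for user, vids in views.items():
        for v in vids:
            watchers.setdefault(v, set()).add(user)

    for a, b in combinations(sorted(watchers), 2):
        sim = len(watchers[a] & watchers[b]) / len(watchers[a] | watchers[b])
        print(a, b, round(sim, 2))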
posterboy on June 14, 2019 | root | parent | prev | next [–]
Spotify simply does not have the content over which an algorithm could lose control.
ariwilson on June 14, 2019 | root | parent | prev | next [–]
Spotify has 40M tracks total. On YouTube, more than 5B videos are watched by users every day. Different scales of problem demand different solutions.
amphibian87 on June 14, 2019 | root | parent | next [–]
I don't know what the comment you are replying to meant; I interpreted it to mean the algo takes you down a rabbit hole to darker content. For me, though, I miss the days when it actually recommended relevant videos, similar to the one I was watching.
My entire sidebar is now just a random assortment of irrelevant interests. For instance, I wanted to learn to play a denser piano chord; I learned it ages ago, but I still get like 20 videos that explain how to add extensions to a 7th chord, even if I'm watching a video on the F-35 fighter pilot.
restingrobot on June 14, 2019 | root | parent | prev | next [–]
I completely disagree; my children have a wonderful time following the recommended videos that youtube provides. I'm interested to hear your reasoning on why it is "disastrous".
ihuman on June 14, 2019 | root | parent | prev | next [–]
How is Spotify's different from Youtube?
notriddle on June 14, 2019 | root | parent | next [–]
I'm pretty sure all content on Spotify gets manually curated first, so abusive tagging doesn't happen, and some of the worst content simply doesn't get uploaded at all. Spotify also doesn't try to be a news site, so they can afford a couple of weeks' lag between a song being uploaded and it showing up in people's recommendation feeds.
anticensor on June 14, 2019 | root | parent | prev | next [–]
More selective recommendation, all-subscriber environment.
aukust on June 14, 2019 | root | parent | prev | next [–]
I disagree in some sense. I personally have found the recommendation system on YouTube pretty good for the main page of the site. The thing that bugs me is the recommended bar to the right (or bottom right) of the videos, which can be really annoying and infested with clickbait etc.
avip on June 14, 2019 | parent | prev | next [–]
It's easier, and more profitable, to write a book than confront your kids about screen time.
la_barba on June 14, 2019 | parent | prev | next [–]
I want to place a kid in front of a screen, press a button and walk away. How am I supposed to do that now?
oh_sigh on June 14, 2019 | parent | prev | next [–]
What about when youtube marketed a specific product for children, but then it turned out they were letting really, really weird stuff in there?
sneakernets on June 14, 2019 | parent | prev | next [–]
>It is no one’s responsibility other than the parent
Yes, but you _must_ understand that most (no, ALL) of the millennial generation grew up with public content over the airwaves that was curated and had to pass certain guidelines. So many parents think that the YouTube Kids app is the same thing. It's not!
If YouTube wants to be the next television, it's going to have to assume the responsibilities and expectations surrounding the appliances it intends to replace. Pulling a Pontius Pilate and tossing the issue to another algorithm to fail at figuring out is not going to fix the problem.
Thankfully, there's much more out there than YouTube when it comes to children's entertainment, actually curated by human beings with eyeballs and brains, not algorithms. The problem is that parents don't know these apps even exist, because YouTube has that much of a foothold as the "place to see things that shut my kid up, so I can see straight."
restingrobot on June 14, 2019 | prev | next [–]
I don't think this is incentivizing bad behavior. It's merely showing the viewer more of what they are already watching, with a gradual introduction to broader material. The example of YouTube serving content to "pedophiles" is borderline asinine. The neural network is just making suggestions on viewing; it's not telling people to watch. In regards to the complaint that "adult" content is being served to adolescents, there is an option to filter out sensitive content altogether.
Also, as a parent to 4 children myself, the idea of letting my kids loose on the internet completely devoid of any supervision is ridiculous. When did it become youtube's responsibility to parent the children in its audience? Should we also ban HBO, Amazon, and Netflix from providing recommendations because it might be a child in front of the screen?
This is just another pointed attempt to censor free speech via the abuse of technology companies. The idea being that the platform will be restrictive if they are constantly badgered about it.
masklinn on June 14, 2019 | parent | next [–]
> with a gradual introduction to broader material.
It doesn't gradually introduce broader material; it gradually introduces more "engaging" material.
restingrobot on June 14, 2019 | root | parent | next [–]
I would argue that your point is semantics, but even so, you still have a choice of whether or not to watch the recommended, more "engaging" material. It doesn't change the overall point of my statement.
tvanantwerp on June 14, 2019 | root | parent | next [–]
I'd say it's quite a different point. My own experience has been that the recommended "engaging" material is something in the same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality. It's the difference between saying "I see you enjoy sandwiches, maybe you would also enjoy salads or a plate of sushi" and "I see you enjoy sandwiches--here's a candy bar, an off-brand soda made with high-fructose corn syrup, and a carton of cheap grocery store ice cream."
restingrobot on June 14, 2019 | root | parent | next [–]
The semantics argument I was pointing out was in regards to "broader" vs "engaging". That's not what my statement was about; it was that no matter what the algorithm recommends to you, you still have the choice of whether or not to watch it. The point you are making is purely anecdotal, as I assure you the neural network is not simply showing you
>same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality
Faark on June 15, 2019 | root | parent | next [–]
You can keep telling yourself that you have a "choice", but in the end we are all just humans, with quite predictable behavior. Biased selection of content has forever been one of the more effective ways of shaping opinion; politics fights hard on that front for a reason. For the first time ever, a very few algorithms are selecting content for millions of people, with apparently little human oversight. Yes, this should worry us. Simply assuming the results will benefit mankind, especially in the long term, would be foolish. It's not quite the usual AI-safety paperclip scenario, but by now it should be very obvious that optimizing watch time, even with current "ai", comes with significant unintended side effects/drawbacks.
hrktb on June 14, 2019 | parent | prev | next [–]
> just making suggestions on viewing, it’s not telling people to watch
I’m not sure I get the difference between suggesting content and telling people what content to watch. Were you trying to drive at a different point?
That aside, it seems your argument is that youtube being neutral in recommending videos shelters them from blame, while the article is basically about why being neutral is harmful.
I personally think anything dealing with human content can’t be left neutral, as we need a bias towards positivity. Just as we don’t allow generic tools to kill and save people in the same proportion, we want a clear net positive.
restingrobot on June 14, 2019 | root | parent | next [–]
To make my first point clear, here is a scenario:
I walk up to you on the street and suggest you give me a dollar.
vs
I walk up to you on the street and take a dollar from you by force.
Youtube is a platform; in order to remain a platform it MUST remain neutral. You cannot have an open forum with bias. There are certain mutually agreed-upon rules (no nudity, extreme violence, etc.), and those limitations are more than enough to handle the vast majority of "negative" content.
I wholeheartedly disagree that we need a bias towards positivity. Who determines what that definition is? Something you see as negative, I might happen to enjoy. If Youtube begins to censor itself in that way, it is no longer a platform and is now responsible for ALL of its content.
hrktb on June 14, 2019 | root | parent | next [–]
Thanks for the clarification on the first point. Won’t youtube effectively shove the next recommended video at a user as long as auto-play is activated?
Also, they are the default view; I’d argue suggestions are a lot more than just "suggestions". It would be akin to a restaurant "suggesting" its menu, where you’d need to interrogate the waiter to explore what else you could be served. For most people, the menu is effectively the representation of the restaurant’s food.
For the neutrality, if you recognize there are agreed-upon rules, as you point out, the next question becomes: who agreed on these rules, and who made them?
Who agreed nudity should be banned? Which country? What nudity? And art? And educational content? And documentaries? At which point does it become nudity? The more we dig into it, the fuzzier it becomes; everyone’s boundary is different, and all the rules are like that.
Any rule in place is positive for one group and negative for another; for a rule to stay in place it needs to have more supporters than detractors, or, put another way, more positive impact than negative.
The current set of rules are the ones that were deemed worthwhile. I think it’s healthy to challenge them, or to push for other rules that could garner enough agreement to stay in place.
restingrobot on June 14, 2019 | root | parent | next [–]
> Won’t youtube effectively shove the next recommended video to a user as long as auto-play is activated ?
You can very easily turn auto-play off. There is plenty of opportunity to switch videos. It would be different if youtube forced you to watch the next video in order to use the site.
>For the neutrality, if you recognize there are agreed upon rules, as you point out, the next question becomes who agreed on these rules, and who made them ?
Youtube made them. Those are pre-conditions for uploading videos. They don't have to have any reason why they made them, those are conditions that must be met in order to upload a video. So by uploading a video you are agreeing to them.
>Any rule in place is positive to a group and negative to another
I don't agree with this generality. However, this discussion is not about the legitimacy of the rules for using youtube; it is about whether or not youtube should censor videos (that meet the basic rules of use). My opinion is no; yours, as you stated above, was:
>I personaly think anything dealing with human content can’t be left neutral, as we need a bias towards positivity.
I agree with you that Youtube should routinely challenge their own rule sets. That is not the same as censoring their content, or in this case modifying their recommendation algorithm.
SomeOldThrow on June 15, 2019 | parent | prev | next [–]
The broader material is the problem. It’s not a natural way of using recommendations: it’s just an ad at that point.
la_barba on June 14, 2019 | prev | next [–]
I think YouTube has just exposed the kind of content people were already interested in, and possibly consuming outside of the public eye. We find it frightening that people readily click on abhorrent content, when they probably were doing it on other platforms earlier. The internet has had gore videos for the longest time; I remember a shotgun suicide video that kids in my school used to shock each other with. If Google as a private company chooses to ban content, then that is their right, but an a priori expectation that an entertainment platform should control people's social behavior and enforce morality is harmful in a free society IMHO.
zanny on June 14, 2019 | parent | next [–]
People were fueling industries of creatively bankrupt content well before the Internet came around, just look at the long term existence of tabloids.
Youtube is optimizing for the underlying psychological mechanisms that put people in that mood, because it makes them suggestible; and because none of this stuff has substance or meaning, people can graze on it, just the way junk-food producers want them to.
FabHK on June 14, 2019 | root | parent | next [–]
I think the analogy to junk food is instructive. Both fast food and YouTube maximise revenue while minimising costs by exploiting human flaws and foibles, and do so much more effectively than was possible 100 years ago. It is creating an environment that is radically different from the one we evolved in.
Watching hours of YouTube - obesity of the mind. Kind of.
la_barba on June 14, 2019 | root | parent | prev | next [–]
>Youtube is optimizing for the underlying psychological mechanisms that put people in that mood because it makes them suggestive and because none of this stuff has substance or meaning they can graze on it like how junk food producers want to promote.
Well, YouTube (or any advertising platform) also wants people clicking on ads and actually buying things, not just graze. AFAIK they already demonetize content that is not advertiser friendly, and thus de-prioritize it. Busy professionals with limited free time are your best bet for people with a lot of disposable income. If anything YouTube optimizes for content that is safe-for-work, and will quickly lead to you opening your wallet. But yes, I think this is a large scale multi-variate problem, and individual simple metrics don't cut it.
KoenDG on June 14, 2019 | prev | next [–]
I doubt this person does not care about the subject they wrote about.
And if the algorithm is producing negative side effects, then, of course, it should be looked at and changed.
I'm no expert myself, but to my understanding: any algorithm is limited by its data set.
Based on its data set, an algorithm comes to conclusions. But one can then, of course, ask: what's the basis for these conclusions?
I recall reading that a certain AI had been fooled into thinking a picture of a banana was showing a toaster or a helicopter, after a few parts of the image were changed to contain tiny bits of those items.
It turned out that the AI used the apparent texture on places in the image to determine what was on the image, rather than doing a shape comparison.
Which sounds like a time-saving measure. Though it may very well have been the method that most consistently produced correct results, for the given dataset.
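As a rough illustration of how little it takes to run such a test - not the actual attack from that research, and the file names, patch and model choice here are all hypothetical stand-ins - something like this is enough to check whether a small patch flips an off-the-shelf classifier:

    # Minimal sketch: paste a small patch onto an image and see whether a
    # pretrained ImageNet classifier changes its mind. "banana.jpg" and
    # "patch.png" are hypothetical inputs; any torchvision model would do.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    img = Image.open("banana.jpg").convert("RGB")
    patch = Image.open("patch.png").convert("RGB").resize((60, 60))
    img.paste(patch, (20, 20))  # a small sticker in the corner of the photo

    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    top = logits.argmax(dim=1).item()
    print(weights.meta["categories"][top])  # "toaster" if the patch succeeds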
Frankly, the attitude of "we don't know how it works and we don't care" cannot possibly end well.
Nor can the attitude of "oh well, make a better dataset then".
I get that we're all excited about the amazing abilities we're seeing here, but that doesn't mean we shouldn't look where we're going.
I recall a story of an AI researcher who didn't want to define anything because he was afraid of introducing bias. Upon hearing this, his colleague covered up his eyes. When asked why he did this, he replied: "The world no longer exists". And the other understood.
Because of course the world still exists. And in just the same way, it's impossible to get rid of bias.
Some human intervention is needed, along with constant checks and comparisons against human results.
dbt00 on June 14, 2019 | parent | next [–]
The problem with the dataset is not just that AI will pick shortcuts and naive heuristics; humans will too.
The problem with the dataset is that you're not in control of who populates it or what their intentions are. There's no understanding of an adversarial model or of threat handling.
michaelbuckbee on June 14, 2019 | prev | next [–]
The NYTimes uses a very human "algorithm" to determine what to report on, and if you compare actual causes of death to what's reported, it's wildly off:
Data: https://ourworldindata.org/uploads/2019/05/Causes-of-death-i...
This isn't a knock against the NYTimes so much as against humanity: we're all fascinated by the lurid and sensational (note that the Google searches are similarly off), and this permeates all levels of life.
vgetr on June 14, 2019 | prev | next [–]
I feel like things were mostly fine until the 2016 election, after which journalists became _very_ concerned. If I had a nickel for each "The algorithms are coming! The algorithms are coming!", I'd be rich. I mean, I didn't like the outcome either, but these types of articles seem to be motivated by a) finding a scapegoat and b) wanting to use "algorithm" in a sentence.
bjt2n3904 on June 14, 2019 | prev | next [–]
What a pleasant way of stating that humans are basically good. We just keep passing the buck. "We'd be fine if it weren't for this algorithm!"
We believe that man is essentially good.
It’s only his behavior that lets him down.
This is the fault of society.
Society is the fault of conditions.
Conditions are the fault of society.
If you ask me, "YouTube's algorithm" is simply exposing the way humanity is. And trying to get an algorithm to "shepherd" humanity to be better is simply Orwellian.
module0000 on June 14, 2019 | prev | next [–]
> If YouTube won’t remove the algorithm, it must, at the very least, make significant changes
It must? No, it doesn't have to do a damn thing. It's a product from a publicly traded company, therefore it "must" return value for stockholders. That means more behavior that increases ad revenue. The author is out of touch with reality. Stop feeding your kids YouTube if you don't want them exposed to it. It's a private service (YouTube), not a public park.
drewbug01 on June 14, 2019 | parent | next [–]
> It must? No, it doesn't have to do a damn thing.
Subject to the laws of the jurisdiction in which it operates, of course. We could - if we so wanted - pass laws to regulate this behavior. That is perhaps the best option, in my own opinion.
> It's a product from a publicly traded company, therefore it "must" return value for stockholders.
The dogma that it "must" return value for shareholders is not an absolute rule[1]; rather it's a set of market expectations and some decisions from Delaware (which have an outsize impact on business law) that encourage it. But it's not required. In fact, many states allow a type of corporation that specifically and directly allows directors to pursue non-shareholder-value goals - the benefit corporation[2].
> The author is out of touch with reality.
Please re-read the HN guidelines[3].
> Stop feeding your kids YouTube if you don't want them exposed to it. It's a private service (YouTube), not a public park.
This is the doctrine of "caveat emptor," essentially - that a consumer is ultimately responsible for all behavior. However, a wealth of regulation exists because that's unworkable in practice. The FDA and the EPA come to mind, but we also regulate concepts like "false advertising." Your stance here ignores the realities of life in service of ideological purism.
[1] http://web.archive.org/web/20190327123200/https://www.washin...
[2] https://en.wikipedia.org/wiki/Benefit_corporation
[3] https://news.ycombinator.com/newsguidelines.html
Nasrudith on June 14, 2019 | root | parent | next [–]
No, we cannot pass laws that do that, no matter how indignant we may be. The whole bloody point of the constitution is that no matter how pissed off the majority is (or "the majority", which may just be a noisy minority), you cannot simply legislate away rights.
The vague "do something!" regulation push has all the marks of a moral panic, and all participants should slap themselves hard enough to leave a mark and repeat: "It is never too important to be rational."
AlexandrB on June 15, 2019 | root | parent | next [–]
Please explain what rights would be legislated away in this case. It's definitely not the 1st amendment - you can still say what you want, just not necessarily on the platform of your choice. This was equally true in the broadcast TV days. So what other right(s) would be legislated away by regulating Youtube's content?
Nasrudith on June 15, 2019 | root | parent | next [–]
Broadcasters had the special pleading, with some scintilla of a point, in that there were actual shared commons to prioritize. In practice it was a fig leaf: you never saw broadcast-censorship arguments over 'values' wrestle with airwave ownership, but instead bullshit doctrines like 'community standards'. The fact that the US has a long history of laying out rights for all, seeing the revolutionary implications, and then saying 'No, wait, that can't be right, it is too different' and going back to the bullshit control it had before for a few centuries is a whole other sad topic.
One thing that did make it through was the ruling that mediums which lack said limitation, like cable and the internet, don't have the rationale for that restriction, and thus the censorship that weak minds had become accustomed to vanished in a puff of logic. This has been the case since cable porn channels were a thing.
By regulating YouTube you effectively regulate what /all/ platforms may push. It is no longer simply YouTube deciding "You know what, we don't want to post that" - an exercise of their collective freedom of association - but "The government doesn't want us to post that, so we can't." You can't just deputize tasks to third parties and expect the limits on exercises of power to vanish. Otherwise we'd see hordes of private detectives as a workaround to Fourth Amendment rights.
Said regulations on YouTube would be a major infringement upon freedom of the press and of speech. Not to mention it is logically equivalent to the government censoring the press whenever content fits whatever criteria it dislikes.
FabHK on June 14, 2019 | parent | prev | next [–]
No. As you yourself recognise (presumably, as you put the "must" in scare quotes and italics), that companies "must" maximise shareholder value is a goal contingent on our decisions and policies, not some natural law.
Of course, it is incumbent on us individually to behave responsibly. But there is space for public policy and regulation, even of YouTube.
fzeroracer on June 14, 2019 | parent | prev | next [–]
Incentivizing value for stockholders above all else is a good way to ensure incredibly anti-consumer practices grow in popularity. Something you might only begin to notice when your kids start getting recommended games from their friends that require you to gamble with IAP to get a new dance or something.
bryant on June 14, 2019 | prev | next [–]
This seems like it takes some notes from Veritasium's theory on YouTube's recommendation algorithm which he posted after his initial reservoir shade balls video went viral. (edited for clarity)
https://www.youtube.com/watch?v=fHsa9DqmId8 for his theory.
raz32dust on June 14, 2019 | prev | next [–]
YouTube's incentives are the best among such platforms IMO. They allow a simple profit sharing model where a part of the ad-revenue goes to the content creator. This is unlike instagram, for example, where the content creators have to peddle products in their ads to make money. Take a fitness channel for example - on YouTube, the content creator can just be honest, and the views alone will guarantee income. On the other hand, on instagram, they have to resort to selling snake oil. I love YouTube for this, and I am constantly amazed by how YouTube has become a livelihood earner for so many.
kauffj on June 14, 2019 | prev | next [–]
http://archive.is/6lbCR
(Archive link for those who prefer non-broken web experiences)
everyoneisbias on June 14, 2019 | prev | next [–]
It's all about advertising money. TV and newspapers are dying and they need someone to blame.
restingrobot on June 14, 2019 | parent | next [–]
I personally think this has deeper political motives as well, but yes I completely agree with you!
jaydz on June 14, 2019 | parent | prev | next [–]
I'm sure Google and Facebook understand this; hopefully they won't cower any further. Big Media wants its "fair share", and it will keep attacking until it gets it.
Analemma_ on June 14, 2019 | prev | next [–]
I don't know if YouTube's problems are so bad that the argument applies in this case, but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense. Just as banks shouldn't be allowed to get so large that they can't fail without wreaking havoc on the economy, if algorithmic recommendation and moderation can't work, then maybe social networks shouldn't be allowed to get so large that human moderation is not possible.
restingrobot on June 14, 2019 | parent | next [–]
That is an apples-to-oranges comparison: Youtube is a platform, not an institution. It is open to all videos, provided they meet certain agreed-upon guidelines, and should not be responsible for censoring content based on individual opinions.
I don't think the recommendation engine is broken at all; in fact it works astonishingly well for the vast majority of people. The fact that there are a few bad actors is also present in the banking industry (Wells Fargo, for instance), to use your own bad comparison.
munk-a on June 14, 2019 | root | parent | next [–]
YouTube is asserting editorial and publishing rights when it promotes certain videos. If it were a pure video hosting site (providing a link to uploaded videos for people to do with as they please), then I'd agree it was just a platform. But a newspaper isn't a platform, and neither is YouTube.
restingrobot on June 14, 2019 | root | parent | next [–]
Youtube is asserting those rights on behalf of the people who own the publishing rights, not on its own behalf. This is an important distinction. Youtube is not the same as a newspaper in any way, shape or form; I don't really understand your comparison.
skybrian on June 14, 2019 | parent | prev | next [–]
The queue for getting your video posted on YouTube would grow infinitely. (Or, more realistically, people would give up and not bother once it takes years.)
But I guess they could charge money to get to the head of the line?
ilikehurdles on June 14, 2019 | root | parent | next [–]
The queue for having your video uploaded and public does not at all have to be the same queue for getting your video included in others' recommendations.
nostrademons on June 14, 2019 | root | parent | next [–]
I can just see the outrage now: "YouTube running a pay-to-play scheme for exposure. Anyone can upload their video, but only the rich can get an audience!"
Come to think of it, this is basically the complaint against AdWords and the gradual takeover of the search result page by paid results.
shearskill on June 15, 2019 | root | parent | next [–]
This is exactly what happens. Prager U and Ben Shapiro advertise heavily on content adjacent to them (gaming), their views go up, and up they go in the algorithm.
hrktb on June 14, 2019 | root | parent | prev | next [–]
There could be a middle ground where videos have limited visibility until they're vetted, or a karma system to fast-track regular uploaders, etc.
I think there’s a ton of ideas to be tried.
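A rough sketch of what that middle ground could look like - every threshold, field and tier name below is invented for illustration, not anything YouTube actually does:

    # Hypothetical "limited visibility until vetted" policy with a karma
    # fast track. All numbers and names are made-up assumptions.
    from dataclasses import dataclass

    @dataclass
    class Upload:
        uploader_karma: int   # track record of prior clean uploads
        human_reviewed: bool  # has a moderator actually watched it?

    def max_visibility(u: Upload) -> str:
        if u.human_reviewed:
            return "recommendable"   # eligible for the recommendation engine
        if u.uploader_karma >= 1000:
            return "recommendable"   # trusted uploaders skip the queue
        if u.uploader_karma >= 100:
            return "searchable"      # findable, but never pushed at anyone
        return "link_only"           # visible only via direct link

    print(max_visibility(Upload(uploader_karma=50, human_reviewed=False)))
    # -> "link_only"

This would also keep the upload queue and the recommendation queue separate, as suggested upthread.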
PretzelFisch on June 14, 2019 | root | parent | prev | next [–]
That's not true: you can upload a video and not allow it to be recommended until some human review has been done. Most youtube channels don't need the recommendation engine.
LocalPCGuy on June 14, 2019 | root | parent | next [–]
That just isn't feasible. Videos would literally take years to reach recommended status - another comment pointed out there are 500 new videos uploaded per SECOND.
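For a sense of scale, here is the back-of-the-envelope version, using the 500-uploads-per-second figure from this thread; the average video length and review speed are invented assumptions:

    # Rough cost of human review at YouTube scale. The 500/sec figure is
    # from this thread; the 5-minute average and 2x review speed are guesses.
    uploads_per_sec = 500
    avg_video_minutes = 5
    review_speedup = 2                    # reviewers watch at double speed

    minutes_per_week = uploads_per_sec * 60 * 60 * 24 * 7 * avg_video_minutes
    reviewer_minutes = 40 * 60            # one full-time reviewer per week
    reviewers = minutes_per_week / review_speedup / reviewer_minutes
    print(f"{reviewers:,.0f} full-time reviewers")  # ~315,000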
munk-a on June 14, 2019 | root | parent | next [–]
If there were one dude, sure. But apparently YouTube is in the business of supporting the upload of 500 videos/second, so they need to deal with the consequences of it. It's not like there's any regulation forcing them to be the place everyone uploads videos to, and there are some valid competitors (though they're far less into the publishing/editorializing facet - Vimeo is much more often direct-linked, for instance).
jerf on June 14, 2019 | root | parent | next [–]
To be clear, I am not speaking for anybody in this thread but myself.
But I will unapologetically and forthrightly say that, yes, if we're going to assert that YouTube has certain responsibilities for the nature of the videos that it hosts, and it turns out that the nature of those responsibilities is such that YouTube can't possibly meet them, then, yes, YouTube as we know it should essentially be shut down, at least going forward.
I am NOT going to say we should deliberately craft the responsibilities in such a way that YouTube is shut down. However, if it turns out that they are incapable of applying even the bare minimum effort that we as a society deem it necessary for them to apply, then, yes, it is absolutely a consequence that YouTube as we know it today may have to be so radically altered as to be a different site entirely.
In the general case, when the law requires certain obligations of you as a business, and you as a business cannot meet them, that does not mean those obligations suddenly no longer apply to you. It means that your business is not legally viable, and needs to change until it is. It may be the case that there is no solution to being both legally viable and profitable, in which case your business will cease to exist. Just as there is, for instance, no solution to being a business built around selling torrent files containing unlicensed commercial content to people. You can't defend yourself by saying you can't afford to get the licenses; your suitable legal remedy was to never have started the business in the first place. There are some concerns around grandfathering here to deal with, certainly, but those obligations can still be applied going forward.
There is no guarantee that there is a solution where a company exerting whatever minimal control they are obligated to assert by society is capable of growing to the size of YouTube. If that is the case, so be it. The solution is not to just let them go because they happened to grow fast first.
(My solution to freedom of expression is an explosion of video sites, where each of them has ways of holding its videos to the societally-mandated minimum standard, and no one site can do it all, because it simply can't muster the resources to be The One Site - as they grow larger they encounter anti-scaling effects. Given how increasingly censorious Silicon Valley is becoming - we are now into censoring the discussions about censoring discussions, like the recent removal of Project Veritas from Twitter for its discussion of Pinterest censoring pro-life films - I expect this to increase the range of expression, not diminish it.)
nostrademons on June 14, 2019 | root | parent | next [–]
Not speaking on behalf of what I want, but on behalf of what is true:
> It may be the case that there is no solution to being legally viable and being profitable, in which case, your business will cease to exist.
Or your business will exist illegally.
There's this interesting interplay between law and economics, where law is generally taken as a prerequisite for frictionless commerce, and yet at the same time if activities that large groups of people wish to partake in are made illegal, the market just routes around them and black markets spring up to provide them. Prohibition. The War on Drugs. Filesharing. Gambling. Employing illegal immigrants. Usury. Short-term rentals. Taxi medallions. Large swaths of the economy under communism.
There are a couple other interesting phenomena related to this: the very illegality of the activity tends to create large profits around it (because it creates barriers to entry, such that the market often ends up monopolized by a small cartel), and the existence of widespread black markets erodes respect for rule of law itself. When people see people around them getting very rich or otherwise deriving benefit from flouting the law, why should they follow it?
Switching to editorializing mode: I find this gradual erosion of respect for law quite troubling, and I think the solution to it needs to be two-fold: stop trying to outlaw behaviors that are offensive to some but beloved by others, and start enforcing the laws that, if neglected, really will result in the destruction of the system.
jerf on June 17, 2019 | root | parent | next [–]
"Or your business will exist illegally."
True.
In the context of this particular case, I was assuming that nothing the current size of YouTube could exist illegally, as that would imply that whatever authority declared it "illegal", yet was not capable of doing anything about it despite it nominally living in its jurisdiction, must be anemic and impotent to the point of being nearly non-existent.
There's already an underground proliferation of video sites, spreading copyrighted content out of the bounds of what the rightsholders want, so it's pretty much assured we'd end up with illegal alternatives. :)
andromeduck on June 14, 2019 | root | parent | prev | next [5 more]
PretzelFisch on June 14, 2019 | root | parent | prev | next [–]
Some of that can be alleviated by trusted publishers (e.g. Fox, CBS, ABC) who won't need a review, plus the introduction of a paid queue. Just because they don't want to do it today doesn't mean it's an impossible solution, just a hard one.
Nasrudith on June 14, 2019 | root | parent | next [–]
That sounds like exactly the shit people left TV for. Let's not recreate television oligarchies for the sake of those afraid of change.
v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]
> Most youtube channels don't need the recommendation engine.
This is just not true. A massive part of a channel's views originates from recommended/up next. Ask pretty much any creator. Only the core audience will have the notification bell on for a specific channel. Many users don't check the Subscriptions section; they either link in from an external source, know beforehand what they want to search for, or just watch what pops up in recommended.
peteretep on June 14, 2019 | parent | prev | next [–]
> but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense
This is a great point that I was going to phrase slightly differently: if YouTube is too large to be able to prevent harm, YouTube needs to be regulated. YouTube gets the benefit of being so large, so it should also bear the cost.
aaomidi on June 14, 2019 | parent | prev | next [–]
Agree with you. If you can't do your job then maybe you'll have to be shut down.
andromeduck on June 14, 2019 | root | parent | next [–]
Since when did it become YouTube's responsibility to police speech?!
yoz-y on June 14, 2019 | root | parent | next [–]
Disclaimer: I work for YouTube; this is my personal view on the situation.
Bear in mind that YouTube does not operate only in the US, with its unhinged free speech laws. Many countries have stricter laws, and YouTube definitely needs to comply with them.
Other than that, the adpocalypse happened because of bad videos being surfaced by the algorithm, so another responsibility is to the creators (and shareholders).
There is nothing to be gained by having crap in your backyard.
aaomidi on June 15, 2019 | root | parent | prev | next [–]
It did when people started demanding it. A company doesn't exist in a vacuum.
tomgp on June 14, 2019 | root | parent | prev | next [–]
When they started making editorial decisions about which videos to promote and to whom - albeit via an automated process.
nradov on June 14, 2019 | parent | prev | next [–]
YouTube needs no defense in this case because video recommendations are protected free speech. In the US at least it would be impossible to outlaw video recommendations in a way that would pass Constitutional review.
sneakernets on June 14, 2019 | prev | next [–]
In addition to this, seeing content creators being slaves to the algorithm is an eye-opening experience, especially when it comes to children's videos. It's all computer-generated garbage powered by responses to changes in the algorithm. If kids suddenly watch more content with alligators, prepare for alligators to be the only thing created, recommended or played. It's wild.
tunesmith on June 14, 2019 | prev | next [–]
Still looking for recommendations that are influenced by people's best-self intentions of who they want to be, rather than by their worst-self behaviors.
edoo on June 14, 2019 | prev | next [–]
We know we have to keep kids sheltered from people who may have unscrupulous neural networks at play, looking for a way to further their own goals at the expense of a child's well-being and overall health.
Engagement on the internet is also being driven by neural networks that are learning to adapt to the user's brain chemistry to statistically modify behavior for maximum engagement/profit. Perhaps it is time to realize that these services are analogous to a random stranger offering your kid candy for their own twisted goals, goals unlikely to be compatible with a child's well-being. If you consider a service like YouTube an untrusted source of interaction, perhaps you'll be as likely to block or monitor it as you would a random chat room.
toss1 on June 14, 2019 | prev | next [–]
YouTube can be a source of astonishingly great education and entertainment that will help grow society, as well as of astonishingly horrid corruptions of nearly anything, which will corrode society at its roots.
Most of these discussion posts seem to miss the point that 'engagement' or 'upvotes' does NOT equal value.
Also missing is the concept that a company with a massive platform has any social responsibility to at least not poison the well of society.
And claiming "it's the parent's responsibility" may have some truth value, but it does not and should not be an excuse to absolve the platform owner of responsibility.
The key to the longer-term success of these platforms is to abandon the low-hanging fruit of "engagement" as a measure of value and develop more substantive metrics that actually relate to value delivered, both to the individual watcher and to society as a whole.
As one audience member, I find their recommendations to be basically crap, nearly never leading me to something more valuable than what I just watched (sure, they'll occasionally put up a recommendation that has enough entertainment value to watch, but much of the time I want my 5 minutes back). To find any real value, I need to search again. That already tells us that their "engagement"-based algos are insufficient to serve the need.
gersh on June 15, 2019 | prev | next [–]
I think there is an inherent problem in optimizing for retention time. Ideally, recommendations should help people find stuff that improves their health, makes them happier, or makes them more informed about the world. However, it doesn't seem like YouTube has metrics on those things. Furthermore, things like that probably can't be determined very quickly for new content.
diogenescynic on June 14, 2019 | prev | next [–]
I mostly watch movie reviews on YouTube, and I'm constantly being recommended either weird Joe Rogan clips, alt-right content, or makeup videos. I don't get it. I've never clicked on or watched anything remotely associated with them. I suspect a lot of the popular YouTube channels are gaming the algorithms or SEOing their videos to get more recommendations.
restingrobot on June 14, 2019 | parent | next [–]
The neural network takes into account which videos were watched by other people who watched the same ones you did. It's quite possible that the movie trailer you watched was popular among demographics that also watched those recommendations. If there isn't much data on you personally, you will see a heavier bias towards other people's viewing patterns.
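In co-watch terms, a toy version of that logic looks something like this (the real system is a large neural network; the histories below are invented, and this is just the intuition):

    # Toy item-to-item co-watch recommender illustrating the intuition
    # above; video names and watch histories are made up.
    from collections import Counter
    from itertools import permutations

    watch_histories = [
        ["movie_review", "joe_rogan"],
        ["movie_review", "makeup_tutorial"],
        ["movie_review", "joe_rogan"],
    ]

    co_watch = Counter()
    for history in watch_histories:
        for a, b in permutations(set(history), 2):
            co_watch[(a, b)] += 1  # how often b is watched alongside a

    def recommend(video, k=2):
        scores = {b: n for (a, b), n in co_watch.items() if a == video}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("movie_review"))  # ['joe_rogan', 'makeup_tutorial']

With only one watched video to go on, the system leans entirely on what everyone else who watched it also watched - hence the odd recommendations.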
JDiculous on June 18, 2019 | prev | next [–]
The "wrong behavior" that Youtube incentives is promoting and encouraging clickbait garbage content (just look at the default homepage). The holy metric now is "watch time", the result being that creators stretch out their content to 10 minutes because then Youtube is more likely to promote it (and midroll add = twice the revenue). Yesterday Youtube recommended me a 10 minute video of some guy explaining how he made this simple drone shot that could've been condensed down to a single sentence - "Turn sensors off". What a waste of time.
But hey they're a corporation and thus have no accountability to the public good.
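The incentive is easy to see if you write down even a crude model of it. Everything below is an invented illustration except the 10-minute midroll threshold, which was YouTube's actual cutoff at the time:

    # Crude creator-revenue model showing why everything gets stretched
    # to 10:01. CPM and view count are arbitrary assumptions.
    def expected_revenue(minutes, cpm=2.0, views=100_000):
        ads = 1           # preroll ad on every video
        if minutes >= 10:
            ads += 1      # midroll ad unlocks at ten minutes
        return views / 1000 * cpm * ads

    for length in (4, 9, 10, 12):
        print(f"{length:>2} min video: ${expected_revenue(length):,.2f}")
    # 9 minutes earns $200.00; 10 minutes earns $400.00 - double, for
    # one minute of padding.

Nothing about the content changes at minute ten; only the payout does, so the padding writes itself.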
umvi on June 14, 2019 | prev | next [–]
Does the algorithm incentivize bad behavior or simply reflect the desires of the viewers?
Someone watching lots of DIY home repair videos will start seeing more. In that case it seems like it's incentivizing good behavior. Likewise, someone watching lots of soft porn on YouTube will be recommended more soft porn.
wyoh on June 14, 2019 | prev | next [2 more]
alt_f4 on June 15, 2019 | prev | next [–]
yet another NYT anti-tech hit piece
annadane on June 14, 2019 [flagged] | prev | next [19 more]
ilikehurdles on June 14, 2019 | prev | next [–]
Google absolutely can do all of those things without an algorithm. What they can't do is accomplish that without impacting profit margins (or at the minimum, executive bonuses). "If it impacts business as usual, then it is impossible" is a naive/flawed/libertarian stance.
xondono on June 14, 2019 | parent | next [–]
You do realize that to cover current needs (400 hours uploaded every minute), YouTube would need to employ more than 72,000 people working full time, right?
emmp on June 14, 2019 | root | parent | next [–]
And these people would inevitably make some number of mistakes in categorization too, or miss something, or just be unable to quite hit some baseline universal standard that doesn't upset a group. Then YouTube still gets the bad press.
bjourne on June 14, 2019 | root | parent | prev | next [–]
But 99.9% of all videos uploaded never get more than a few handfuls of views, so those are irrelevant. Of the remaining 0.1%, you don't need to watch every second of every frame - speeding through at twice the speed should be doable. So by your own calculations, 72,000 * 0.001 * 0.5 = 36 people working full time.
xondono on June 15, 2019 | root | parent | next [–]
You can set that 0.001 factor as big or as small as you like, but then we'd get the same NYTimes hit piece saying this is intentionally being done by humans.
throwaway287391 on June 14, 2019 | parent | prev | next [–]
You made me curious, so I did some back-of-the-envelope math. An average of 576K hours of video is uploaded to YouTube every day [1], which is 4.032M hours per week. If the reviewers watch all the video at 1x speed and work 40 hours per week, you'd need about 100K reviewers to do the job. (This is just to watch the video - not including any additional work done to annotate the video with whatever information you want out of your reviewers.) If each one costs $30K a year (probably a lowball estimate including salary, insurance, etc.) it would cost a total of $3B per year. YouTube makes $4B in revenue per year and roughly zero profit AFAICT, so there's no way this is feasible.
[1] https://www.quora.com/How-many-videos-are-uploaded-on-YouTub...
scarface74 on June 14, 2019 | parent | prev | next [–]
I'm usually a proponent of the "walled garden" when it comes to applications, and of strict sandboxing for most users, since software can harm your computer.
But in the case of YouTube, there is absolutely no way that they can curate it while it stays as open as it is.
ilikehurdles on June 14, 2019 | root | parent | next [–]
There is no need to curate every video, only the ones qualified enough to be recommended/showcased to a public that is not explicitly looking for them.
patorjk on June 14, 2019 | root | parent | next [–]
Say I watch a video on a topic like "nes video game speed running". Right now I'd see other nes video game speed running videos; it's very useful. In a curated world, what would be recommended? It's probably too much of a niche topic to yield results that would be very useful.
bsder on June 14, 2019 | root | parent | prev | next [–]
> But in the case of YouTube, there is absolutely no way that they can curate it while it stays as open as it is.
So?
If YouTube exits the space and allows oxygen back into the video sharing market, we might actually get some different video sharing services that do different things (a la NicoNicoDouga).
scarface74 on June 14, 2019 | root | parent | next [–]
Video streaming, processing, and storage at scale still cost a lot of money. I don't think even Google is doing it profitably.
physics515 on June 14, 2019 | parent | prev | next [–]
YouTube does human curation already. It's referred to as a "playlist", and every user has the ability to create and share them. So what you are asking is for Google to create its own playlists? Would this also entail removing that ability from other users?
emilfihlman on June 15, 2019 | prev | next [–]
I mean, PewDiePie's info is rather public, but what's with the need to "dox" him right at the beginning?
0815test on June 14, 2019 | prev | next [–]
Quite true, but let's not pretend that Twittr, Tumbler and Fakebook aren't also "incenting" all sorts of distorted behaviors of their own! These sites are "algorithms" all the same, even if the workings of those algorithms are in some ways more transparent. We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.
nkozyra on June 14, 2019 | parent | next [–]
> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.
This always sounds good, but decentralization is nearly impossible to commoditize or make appealing to the general public. Outside of evangelism and word-of-mouth, how are people going to escape the YouTube advertising budget and instead choose - en masse - the product that is better for their privacy?
There's just so much money and inertia to fight.
swiley on June 14, 2019 | root | parent | next [–]
YouTube removing harmless content over copyright etc. is one way.
phreeza on June 14, 2019 | parent | prev | next [–]
If the ranking algorithm is open for all to see, won't that encourage even worse gaming of the system? I am trying to think of comparable situations in existing open systems, but none come to mind.
thephyber on June 14, 2019 | parent | prev | next [–]
> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another
We already have them, yet FB, IG, Twitter and YT are the social media behemoths.
Are you making a plea for the average internet person to care about the values of the platforms they use over the platform content? You are likely preaching to the choir here on HN, but I would guess that the audience here is only 1% of 1% of the audience you need to message.
Corps make good use of psychological experiments to optimize their utility function. "Evil is efficient." The problem is that companies optimize for money without taking any other factor into account in any significant way.
> In 1970, Nobel Prize-winning economist Milton Friedman published an essay in The New York Times Magazine titled "The Social Responsibility of Business Is to Increase Its Profits." [1]
Arguably this quote incentivized the destruction of "good corporate citizenship" (although I admit it's possible that concept never existed in a broad sense).
[1] https://www.newsweek.com/2017/04/14/harvard-business-school-...
tqi on June 14, 2019 | parent | prev | next [–]
I think the author's issue is not that her recommendations are bad, but that other people are getting recommendations for things she disagrees with (i.e. conspiracy theory videos, child-unsafe content, etc.). So I don't think she would view decentralization as a win.
jiveturkey on June 14, 2019 | prev [–]
Wow, I'm conflicted. First, an obvious idiot statement, which helps us ground our analysis:
> Human intuition can recognize motives in people's viewing decisions, and can step in to discourage that — which most likely would have happened if videos were being recommended by humans, and not a computer. But to YouTube's nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist's appetite is a job well done.
So this person is advocating that a human (i.e. another human besides oneself, an employee at YouTube) have access to the click stream of individual users? This proposal, in 2019??? Of course this would have to be compulsory to be effective. Why would I want a megacorp making moral decisions for me? I'm OK with it making amoral algorithmic decisions.
The author is generalizing the problem of YT Kids, which should be human curated, to all of YouTube.
OTOH, yeah, feeding our worst impulses is kind of a problem. But tweaking the algorithm isn't the solution. The machine itself is designed to thrive on attention.