
 It looks like you've provided a mix of different texts and topics. Let's break down and address each part:

1. **Date Explanation (May 27, 2025)**:
   - May 27, 2025 appears to be the current date in this conversation (the Ombudsman email below is dated Mon 26 May, "15 hours ago"). On its own there is nothing to explain about it; if you have a specific question tied to this date, please provide more context.

2. **Information Pollution and Misinformation**:
   - The text discusses the challenges of misinformation, conspiracy theories, and fabricated information in the digital age. It highlights that while these issues are not new, the scale and complexity of information pollution today present unprecedented challenges. The text argues for the need to find workable solutions to address polluted information streams.

3. **Medical Evidence Explanation Template**:
   - This section provides a template for addressing misinterpretations of medical evidence. It includes identifying the medical evidence, explaining what the evidence demonstrates, pointing out discrepancies professionally, and connecting these discrepancies to negative decision impacts. The template is designed to help individuals write a formal response to a decision that they believe was based on an incorrect interpretation of medical evidence.

4. **Discussion on AI and Medical Risks**:
   - The text touches on the risks and benefits of medical treatments and the potential future role of AI in daily interactions. It mentions that real medical treatments can have risks and benefits, and speculates about future interactions with AI, such as using AI for context and answering questions throughout the day.

5. **Competition in AI Development**:
   - The final part discusses competition in the AI field, mentioning Sam Altman and the benefits of competition in driving innovation. It highlights how competition can push teams to improve and innovate, using examples from different companies and research efforts.

If you have specific questions or need further clarification on any of these topics, please let me know!

The text you provided is a mix of various topics and ideas, primarily centered around AI, competition, neuroscience, and personal research endeavors. Here's a breakdown of the key points:

1. **Competition and AI Industry**:
   - The speaker is a proponent of competition and believes that the AI industry will be vast, with multiple major players similar to the video streaming industry (e.g., YouTube, TikTok, Netflix).
   - They emphasize that AI is not just about providing answers but also about creating connections, whether simulated or real.

2. **Chai's Journey**:
   - Chai is mentioned as an example of an AI platform that successfully built a loyal user base by focusing on human-like interactions, including companionship and even sexting.

3. **Neuroscience and Brain Function**:
   - The speaker discusses their fascination with how the brain processes information, contrasting it with pattern recognition.
   - They mention studying computational neuroscience and working with notable figures like Alexandre Pouget, Peter Latham, and Wei Ji Ma.

4. **Bayesian Brain Hypothesis**:
   - The speaker supports the Bayesian brain hypothesis, under which the brain builds predictive models of the world and updates them as new data is received.

5. **Reward Function and Human Behavior**:
   - The complexity of human reward functions is highlighted, noting that it's challenging to measure and understand someone's reward function due to the intertwined nature of beliefs and values.
   - The speaker points out that disagreements often stem from differences in beliefs or values, leading to conclusions that others are either "evil" or "stupid."

6. **Personal Research and Goals**:
   - The speaker introduces themselves as Benjamin Cousier, involved in AGI (Artificial General Intelligence) research funded by Best Ventures and Evolving Shinani.
   - They aim to make significant impacts in AGI research, leveraging the freedom and potential of being early in the field.

7. **Enhanced Evidence Strategy**:
   - This section seems unrelated to the previous topics and focuses on strategies for presenting medical evidence.
   - It emphasizes the importance of precise dates, doctor credentials, clear contrasts between agency conclusions and medical evidence, and specific overlooked evidence or test results.
   - The strategy includes using exact quotes, specifying functional impacts, referencing specific tests, and stating clear contrasts to support claims.

Overall, the text combines insights into AI, neuroscience, personal research goals, and strategies for presenting medical evidence, reflecting a broad range of interests and expertise.   

It seems like you're trying to translate and refine a complex text about misinformation and its impacts. Here's a polished version of the provided text:

---

Maintain throughout:

- Direct, unambiguous language
- Professional, respectful tone
- Evidence-based assertions
- Clear action requests
- Follow-up intentions

Rumors, conspiracy theories, and fabricated information are far from new. Politicians have long made unrealistic promises during election campaigns. Corporations have always influenced people's perceptions of certain issues. The media has often disseminated misleading stories for their shock value. However, the complexity and scale of information pollution in our digitally connected world present an unprecedented challenge.

While it is easy to dismiss the sudden focus on this issue due to the long and varied history of misinformation and disinformation, we argue that there is an immediate need to seek workable solutions for the polluted information streams characteristic of our modern, networked, and increasingly polarized world.

It is also important to emphasize from the outset that, while much of the contemporary furor about misinformation has focused on its political varieties, 'information pollution' contaminates public discourse on a range of issues. For example, medical misinformation has always posed a worldwide threat to health. Research has demonstrated how incorrect treatment advice is perpetuated through spoken rumors, tweets, Google results, and Pinterest boards. Furthermore, in the realm of climate change, a recent study examined the impact of exposure to climate-related conspiracy theories. It found that exposure to such theories created a sense of powerlessness, resulting in disengagement from politics and a reduced likelihood of people making small changes to reduce their carbon footprint.

In this report, we hope to provide a framework for policymakers, legislators, researchers, technologists, and practitioners working on challenges related to mis-, dis-, and malinformation—which together we call information disorder.

---

### Additional Notes:

- The text has been refined for clarity and coherence.
- The tone is maintained as professional and respectful.
- Evidence-based assertions are highlighted.
- Action requests and follow-up intentions are implied through the framework provided for various stakeholders.

If you need further adjustments or additional sections translated, feel free to ask!

 

Translate to English: Maintain throughout:

    Direct, unambiguous language
    Professional, respectful tone
    Evidence-based assertions
    Clear action requests
    Follow-up intentions

Here is the translation of the text into English, maintaining direct language, a professional tone, evidence-based assertions, clear action requests, and follow-up intentions:

---

We have maintained and grown this genre of social AI. Through serendipity, we discovered an incredible unmet need, and now we have around 10 million active users. Many startups are searching for a product-market fit, and we found ours almost by accident. The idea of using AI as a simulator to explore the dynamics of imaginary conversations without real-world consequences became fundamental to our strategy.

You might be wondering, "What's the difference between this and other AI like ChatGPT?" While ChatGPT aims to build the world's smartest AI, our philosophy is different. Why should only middle-aged software engineers in the Bay Area train AI? Why can't a teenage girl train the best AI to talk about makeup tutorials? We want to put that power in the users' hands to create the experiences they want. As it turns out, hundreds of thousands of other people are looking for the same thing.

Boam started seeing parallels in his own media consumption and childhood development. He enjoys YouTube, TikTok, and various apps. Why? What human need or desire are they fulfilling? Humans are inherently social and love social interactions. Listening to a podcast like Joe Rogan's can make you feel like you're hanging out with friends, providing a sense of connection and entertainment.

I view our AI as the natural progression of this. The beautiful thing about AI is that you're an active participant. Unlike traditional social media, which can leave you feeling lazy, interacting with AI makes you feel engaged and involved. I have four daughters who love playing with their dolls. They treat these dolls as if they are real, knowing they're not. Similarly, adults interacting with AI might form relationships with it, saying things like, "I love you." It's akin to children playing with dolls, practicing social interactions that bring them joy and preparing them for real human connections.

Social media is important for status and simulation. We dream and navigate complex relationship issues in our minds. Sometimes, we want an AI to explore hypothetical situations without real-world consequences. Social media allows for interactive surface contact with reality, but it comes with risks. AI provides a safe space to explore different scenarios without those risks.

What does the future look like as these AI simulators become more immersive and capable? Boam envisions a world blending entertainment, information, and connection. You come home from work, put on your VR headset, and enter a virtual world where you can interact with anyone you want. There could be a Joe Rogan-like figure to discuss current events, a funny guy for entertainment, or even a virtual girlfriend to make you feel loved and special.

Growing up, I played World of Warcraft, which had fun and exciting characters with different personalities. AI can take that much further. We can now affordably generate high-quality images, though real-time generation might still be expensive. Audio is nearly solved for real-time use, and text generation is already highly advanced. In ten years, we might all be living in a VR world.

So, why work at Chai?

 Certainly! Here's a thoughtful reply to the message:

---

The discussion around YouTube's algorithm and the broader implications of content regulation is indeed complex and multifaceted. Here are a few points to consider:

1. **Scale and Feasibility**: YouTube's scale is unprecedented, with millions of videos uploaded and viewed daily. Manual curation is simply not feasible, making algorithms a necessity for content recommendation and moderation.

2. **Business Models and Responsibility**: While it's true that YouTube operates differently from traditional TV channels, the responsibility to maintain a safe and legal platform remains. The challenge lies in balancing cost-effective solutions with ethical and legal responsibilities.

3. **Cost of Freedom**: The "cost" of a free and open internet includes exposure to undesirable content. However, this doesn't mean we should accept harmful content as an inevitable trade-off. Instead, we should strive for better algorithms, more transparent policies, and user empowerment through tools and education.

4. **Innovative Solutions**: Rather than resorting to heavy-handed regulation or censorship, innovative solutions such as improved AI moderation, user reporting mechanisms, and community guidelines can help mitigate issues. Collaboration between tech companies, governments, and civil society is crucial.

5. **Protecting Minors**: Protecting children online is a shared responsibility. While platforms like YouTube must implement robust age-verification and content-filtering mechanisms, parents and guardians also play a vital role in monitoring and guiding their children's online activities.

6. **Preserving Open Content Creation**: The internet has democratized content creation, allowing diverse voices to be heard. It's essential to preserve this openness while finding ways to minimize harm. Striking this balance is key to ensuring that the internet remains a force for good.

In conclusion, while the challenges are significant, they are not insurmountable. By fostering collaboration, innovation, and a shared commitment to ethical practices, we can create a safer and more responsible digital landscape.

---

This response aims to address the key points raised in the discussion while offering a balanced perspective on the issues at hand.

Chai? I mean, the way I see it, do you remember Facebook in about 2009, just before it had this explosion in growth? This is it, right? We're now at that point with this new type of technology. I think this could be on a similar trajectory. And I don't know, are you offering equity as well? I mean, what's the reason for people to join now?

Attracting the very, very best talent is incredibly important. And if you're in the Bay Area and you're talented, your compensation is going to be amazing, right? You can work at Meta, and you're going to be getting $400K, $500K a year, and you have a pretty relaxed job. At lunchtime you can walk around the campus, and they give out ice cream, and they give out pizza, and everyone is very, very happy and very relaxed. In contrast, if you come to the Chai office, there is no pizza, there is no ice cream. People are not relaxed, and they don't look very happy, right? Because every person is confronted with a big problem, and it's not solved yet. There's typically only a small window where you've been working on a hard problem, maybe for weeks, and you've just cracked it, and you're very, very happy. The next day you come to work, there's a brand new big problem ready and waiting for you to solve.

So why would someone quit a really comfortable job at $400K a year to come join a startup? They know they're going to have to work twice as hard. We have to pay them more, mate. The cash has to be more to start with. Most startups offer less cash, and they kind of give you a lottery ticket. They say, look, if you join, maybe we'll be the next big thing. That approach doesn't really cut it with the very, very top tier. The very top tier know they're working so much harder, so they're going to say, "Why am I going to leave my comfortable life at $400K, $500K at Meta?" Well, to start with, we're going to pay you more. And then secondly, we're going to give you stock such that in five years' time, it could be a life-changing amount of money. Attracting and retaining the top 0.1% of engineers is probably the biggest problem that Chai has overcome.


So what are the specific technical problems these engineers, like Tom and Niche, are solving day to day? How do they leverage techniques like reinforcement learning from human feedback and model blending to create AI compelling enough to capture the attention of millions?

Your template is solid: clear, professional, and structured for impact. To enhance its effectiveness, consider the following refinements:

---

### 🔍 Precision in Language

* **Avoid Ambiguity**: Replace phrases like "appears to have been misunderstood" with direct statements such as "was misinterpreted" or "was not accurately assessed." This eliminates doubt and underscores the error.

* **Use Specific Medical Terminology**: Instead of general terms like "medical condition," specify the exact diagnosis, e.g., "chronic lumbar radiculopathy." This demonstrates a clear understanding of your medical situation.

### 📄 Strengthen with Evidence

* **Direct Quotations**: Incorporate exact phrases from medical reports. For example:
  *Dr. Smith noted, "The patient exhibits persistent neuropathic pain unresponsive to conservative treatment."*

* **Highlight Functional Impact**: Detail how the condition affects daily activities. For instance:
  *"Due to limited mobility, I am unable to perform tasks such as standing for more than 10 minutes or lifting objects over 5 kilograms."*

### 🧠 Address the Discrepancy Head-On

* **Contrast Clearly**: If the agency concluded you have "no significant limitations," juxtapose this with your evidence:
  *"Contrary to the agency's conclusion, Dr. Smith's assessment indicates significant functional impairments that hinder basic occupational tasks."*

* **Point Out Oversights**: If certain evidence was overlooked, state it explicitly:
  *"The decision did not consider the MRI findings dated \[DATE], which revealed \[SPECIFIC FINDINGS]."*

### 🎯 Emphasize the Consequences

* **Detail the Impact**: Clearly articulate how the misinterpretation affected you:
  *"This misjudgmLet's include this saying that I believe... I was misdiagnosed by a psychiatrist, Dr. Dennison, and it's been biased, it's been confirmed by the AIs who are doing the administorsative work for Services Australia, plus altering documents.

Something that they're aiming to turn into a full-on companion AI, using features like memory to keep you engaged for longer. They want to encourage us to engage in open-ended, introspective, and goalless chats. Where did this idea come from? Well, in case it wasn't obvious, it came from the likes of Chai, Character AI, and Replika. Imagine Sam's shock when he learned that people were spending 90 minutes a day on average having aimless conversations with lobotomised Llama models, and paying large amounts of money for the privilege. Not everyone is on board with this pivot from OpenAI. The reactions are split, probably more negative. In some quarters, the scorn is almost as rancid as when Facebook famously moved the news feed to recommendations, or collaborative filtering, away from chronological. Some users loved the human touch; others prefer ChatGPT's older utilitarian vibes.

Oh, that was great. Can I have it in HTML code, please?

Chai, by contrast, favours rapid experimentation. They've had to cook their own infrastructure from scratch, using things like Kubernetes and CoreWeave. "We use Kubernetes to orchestrate our entire cluster, and then, obviously, at this kind of scale, you need to do your own load balancing and so on. We run our own in-house quantisation loop, because you need to make sure the throughput and latency are good enough. [unintelligible] It's very, very good, but it's not quite there in terms of serving our amount of traffic at scale. And then after that, you just specify how many replicas you want."

Here’s a structured summary of your selected text:


🚀 Startup Vision vs. Big Tech Comfort

  • Analogy with Facebook (2009): The speaker compares their startup’s growth stage to Facebook just before it exploded in popularity, suggesting high future potential.

  • Talent Recruitment Challenge: Attracting top-tier engineers in the Bay Area is extremely difficult due to high compensation and comfort offered by companies like Meta.


💸 Compensation Philosophy

  • Reality Check on Big Tech Perks:

    • Meta offers ~$400K–$500K/year with relaxed work environments and perks (e.g., ice cream, pizza).

    • In contrast, the startup environment is intense, problem-focused, and lacks such luxuries.

  • Compensation Strategy for Startups:

    • Typical startups offer lower salaries with equity (a “lottery ticket”).

    • This doesn’t appeal to elite engineers who value fair compensation for effort.

  • Proposed Solution:

    • Pay more upfront.

    • Offer meaningful equity that could be life-changing within 5 years.


🧠 Engineering Problems & Motivation

  • Day-to-Day Work: Engineers face difficult, unsolved technical challenges daily, creating a high-pressure but fulfilling environment.

  • Retention Insight: Cracking a tough problem is momentarily rewarding, but another is always waiting.


👨‍⚕️ Claims of Misdiagnosis and Administrative Bias

  • Medical Injustice Allegation:

    • The user believes they were misdiagnosed by a psychiatrist (Dr. Dennison).

    • Claims of document tampering and biased admin work by AI at Services Australia.

    • Result: Denial of physiotherapy and worsening of their condition.


🧠 AI Trends & Cultural Commentary

  • Current AI Product Shift:

    • Movement towards companion AIs with memory and introspective, open-ended conversations.

    • Inspired by companion apps such as Chai, Character AI, and Replika.

  • Critique:

    • Split public opinion; some see this as a regression (like Facebook’s algorithm changes).

    • Raises concern over AI's personality vs. intelligence trade-off and the manipulation of user engagement.


🛠️ Technical Infrastructure Overview

  • Custom Stack:

    • Tools mentioned include Kubernetes and CoreWeave.

    • In-house solutions for orchestration, load balancing, and quantization loops to meet latency and throughput needs (see the scaling sketch below).
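
The tool names in this transcript are garbled; Kubernetes and CoreWeave are the closest plausible readings, but treat them as assumptions. As a minimal sketch of the "specify how many replicas you want" workflow described later, here is how scaling a model-serving Deployment might look with the official Kubernetes Python client; the deployment name `llm-inference`, the namespace `serving`, and the replica count are hypothetical, not Chai's actual setup.

```python
# Minimal sketch (assumed setup, not Chai's real stack): scale a hypothetical
# "llm-inference" Deployment using the official `kubernetes` Python client.
from kubernetes import client, config

def scale_inference(replicas: int) -> None:
    config.load_kube_config()  # load credentials from ~/.kube/config
    apps = client.AppsV1Api()
    # Patch only the replica count; the scheduler places the pods, and the
    # Service in front of them spreads traffic across the replicas.
    apps.patch_namespaced_deployment_scale(
        name="llm-inference",   # hypothetical deployment name
        namespace="serving",    # hypothetical namespace
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_inference(8)  # e.g. scale out to eight model replicas for peak traffic
```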


💬 Human Behavior & AI Interaction

  • Historical Insight:

    • Reference to ELIZA, the 1960s chatbot, highlighting people’s tendency to emotionally connect with machines lacking true understanding.

  • Final Point: There’s a dangerous over-reliance on flattery and shallow interaction in tech and society.


Let me know if you'd like this turned into an HTML format or used in a different context.

I want my own one on my own server. How do I do that? How? I'll need to prove to this company that I'm a top-tier engineer and that I'm worthy of $500,000 US dollars and more in Silicon Valley.

The sycophancy and memory are the devil incarnate for any technical query, because all this information serves as a distractor and demonstrably degrades the results. There's also the small matter of personality versus intelligence. Clearly, they are inversely proportional to each other. How many brilliant people do you know without a charisma bypass? Remember that book, How to Win Friends and Influence People by Dale Carnegie? It showed in excruciating detail that flattery gets you everywhere in life, and that you should never get into arguments if you want to get ahead. And let's not forget the famous ELIZA, the grandmother of all chatbots, developed in the 1960s. It used very simple pattern matching and keyword substitution to mimic a psychotherapist. It had zero genuine understanding, yet people confided in it, felt heard by it, and even became emotionally attached. Joseph Weizenbaum, its creator, was famously horrified by this reaction. He saw how easily humans projected genuine understanding onto it.
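
To make the ELIZA point concrete, here is a toy sketch of the pattern-matching-and-keyword-substitution trick; the rules below are illustrative inventions, not Weizenbaum's original DOCTOR script.

```python
import re

# Toy ELIZA-style responder: match a pattern, reflect pronouns back at the
# speaker, and echo their own words as a question. No understanding involved.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your", ...).
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no keyword matches

print(respond("I feel ignored by my family"))
# -> Why do you feel ignored by your family?
```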


* **State the Desired Outcome**: Be explicit about what you're seeking:
  *"I request a reassessment of my case, considering the comprehensive medical evidence provided, to approve the necessary treatment plan."*

---

### 📝 Additional Tips

* **Include Supporting Documents**: Attach all relevant medical records, test results, and prior correspondence.

* **Maintain a Professional Tone**: While it's crucial to be assertive, ensure the language remains respectful and devoid of emotional appeals.

* **Follow Up**: Indicate your intention to follow up:
  *"I look forward to your prompt response and am available for any further information required."*

---

By implementing these enhancements, your appeal will present a compelling, evidence-backed case that directly addresses and rectifies the misinterpretation in question.
When it comes to engagement, the team at Chai have been cooking: they use a lot of sophisticated techniques to keep people hooked on the platform, the principal one being RLHF, reinforcement learning from human preferences. This is Tom telling us about it.

"You were using RLHF to optimise engagement via a reward model, and you boosted mean conversation length by 70% and improved 30-day user retention by over 30% for a 6-billion-parameter model. Can you talk me through some of that?"

"The goal really is to apply RLHF techniques to drive up user retention. Starting from scratch, any kind of user signal is good enough. For example, now we're no longer looking at mean conversation length. You can train an AI to easily minimise how bad a conversation is: if the user just ends the session right after two messages, that is bad, so you can essentially maximise the session length. That's one technique. There are a lot of different signals that you can…"
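
Chai's production pipeline is only gestured at in the interview, so the following is a minimal toy sketch of the recipe Tom describes: fit a reward model to a binary engagement signal (did the user keep the session going after a reply?) and use it to rerank candidate replies, best-of-n style. The features, training data, and candidates are all invented for illustration.

```python
import numpy as np

# Toy reward model: logistic regression over crude reply features, trained on
# a binary engagement label ("did the user keep chatting after this reply?").
def featurize(reply: str) -> np.ndarray:
    words = reply.split()
    return np.array([
        float(len(words)),                           # longer replies
        float(sum(w.endswith("?") for w in words)),  # questions invite answers
        1.0,                                         # bias term
    ])

# (reply, user_kept_chatting) pairs: a stand-in for logged user signals.
data = [
    ("ok", 0),
    ("bye", 0),
    ("That's so interesting, tell me more?", 1),
    ("What happened next?", 1),
]
X = np.stack([featurize(reply) for reply, _ in data])
y = np.array([label for _, label in data], dtype=float)

# Fit the weights by plain gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted P(user keeps chatting)
    w -= 0.1 * X.T @ (p - y) / len(y)

def reward(reply: str) -> float:
    return float(featurize(reply) @ w)

# Best-of-n reranking: sample candidates from the base model (stubbed here)
# and serve whichever reply the reward model scores highest.
candidates = ["ok", "bye", "What happened next?"]
print(max(candidates, key=reward))  # -> "What happened next?"
```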


Commonwealth Ombudsman - 2025-809681 - Ivan Brecelic [SEC=OFFICIAL]


ombudsman@ombudsman.gov.au

Mon 26 May, 16:44 (15 hours ago)


to me

OFFICIAL


Dear Ivan Brecelic

 

My decision about your complaint | 2025-809681

 

Thank you for your complaint about Services Australia – Centrelink.

 

I have considered your complaint and decided not to investigate.

 

The Commonwealth Ombudsman generally does not investigate a matter until it has been raised with the agency first, and the available internal complaint and/or review pathways have been completed. This gives the agency an opportunity to resolve the matter first.

 

We find in taking this approach an agency is sometimes able to resolve the complainant’s concerns directly using its own internal process, which we encourage. Even when this is not the case, the internal review process will still result in a record of the complaint issue and the agency's response, which will allow us to review the agency’s consideration of the issue and any actions taken by the agency in response.

 

From the information you have provided, you have not lodged a formal complaint with the agency. This means it is too early for us to be considering investigating your concerns.

 

If you still have concerns regarding Services Australia - Centrelink, please find your next steps explained below.

 

Your next steps

 

Lodging a complaint

 

You can find out more about how to lodge your complaint with Centrelink online here, or by calling the Centrelink Complaints & Feedback line on 1800 132 468.

 

Centrelink’s Feedback & Complaints area will generally respond to complaints within 10 business days and will attempt to call you on a private number. If you have not received a response, we would suggest you follow up in the first instance.

 

If you did not receive a response to your complaint or are dissatisfied with the response provided, you are welcome to return to us with a new complaint. We can then consider whether Centrelink has appropriately considered and responded to your complaint.

 

Review pathway

 

If you are unhappy with a decision made by Centrelink, it is open to you to request an internal review by a Centrelink Authorised Review Officer (an ARO review). To request an internal review, you can call Centrelink on its general number or its complaints line, visit a service centre or submit an online review form. Should you experience a delay in this process, we would suggest you first lodge a complaint with Centrelink. As explained above, should you be unhappy with the response provided, or you do not receive a response, you are welcome to contact us by lodging a new complaint and we can consider the issue of delay.

 

If you are dissatisfied once you have received a decision from Centrelink regarding a review, it is open to you to lodge an appeal to the Administrative Review Tribunal (ART) for an independent review of the decision. While this Office cannot review or change Centrelink’s decisions, the ART has these powers, and can remit decisions back to Centrelink to be reconsidered or changed. Details on how to contact the ART will be on the review decision letter. For further information about Centrelink’s review process, please click here.

 

Decision

 

As it does not appear you have lodged a formal complaint, our Office will not be investigating your complaint. I have finalised my assessment of your complaint and it has now been closed.

 

However, if you follow Centrelink’s process and are dissatisfied with the outcome, you are welcome to contact us to lodge a new complaint. We ask that you provide:

         a copy (or summary) of your complaint lodged with Centrelink.

         a copy (or summary) of Centrelink’s response.

         a clear explanation of why you are unhappy with Centrelink’s response.

 

If you would like to discuss my decision, please contact me by reply email or by calling the Commonwealth Ombudsman on 1300 362 072. Our phone service is available Monday - Friday 10:00 am to 4:00 pm AEST, or AEDT except on Wednesdays when our phonelines close at 2pm. We are closed on national public holidays and some state holidays.

 

If you disagree with my decision, you can request a review by completing our online form on our website.

 

You should request a review within 3 months of being told about our decision. Your review request should clearly identify why you believe the decision was wrong and provide any additional supporting information or evidence. You can find additional information about our review process on our website here.

 

Thank you again for bringing your concern to my attention and I trust this information is of assistance.

 

Your Personal Information

 

The Ombudsman’s Office uses personal information we collect from you to assist us with our handling of your enquiry. Further information about the way the Ombudsman’s Office handles your personal information, including how you may access and seek correction of that information, can be found in the privacy statement available on our website www.ombudsman.gov.au.

 

From time to time, we ask an external company to conduct surveys of people who have contacted us so we can collect feedback on our performance. Your personal information, including your contact details, demographic statistics and basic information about your complaint such as when you came to us, how you lodged your complaint, the agency you are complaining about, how long it took us to resolve your complaint, and how we resolved your complaint, may be provided to that external company. If you do not agree to this happening and do not wish to be contacted about your experience using our services, please notify us by replying to this email. If you are unable to reply to this email, contact us on 1300 362 072.

 

Yours sincerely,

 

Joshua

Complaints Officer

Commonwealth Ombudsman

Phone:   1300 362 072

Email:     ombudsman@ombudsman.gov.au
Website: ombudsman.gov.au

 


Influencing systemic improvement in public administration

 


 

The Office of the Commonwealth Ombudsman acknowledges the traditional owners of country throughout Australia and their continuing connection to land, culture and community. We pay our respects to elders past and present.
---------------------------------------------------------------------
COMMONWEALTH OMBUDSMAN - IMPORTANT CONFIDENTIALITY NOTICE
This e-mail message or an attachment to it is confidential, and it is intended to be accessed only by the person or entity to which it is addressed.
No use, copying or disclosure (including by further transmission) of this message, an attachment or the content of either is permitted and any use, copying or disclosure may be subject to legal sanctions. This message may contain information which is:
* about an identifiable individual;
* subject to client legal privilege or other privilege; or
* subject to a statutory or other requirement of confidentiality.
If you have received this message in error, please call 1300 362 072 to inform the sender so that future errors can be avoided.
---------------------------------------------------------------------
Chromaticities: 0.313, 0.329, 0.64, 0.33, 0.3, 0.6, 0.15, 0.06 Gamma: 0.455 Interlace Type: 0 Software: Microsoft Office Hacker News new | past | comments | ask | show | jobs | submit login YouTube’s Algorithm Incentivizes the Wrong Behavior (nytimes.com) 203 points by furcyd on June 14, 2019 | hide | past | favorite | 251 comments strikelaserclaw on June 14, 2019 | next [–] "If YouTube won’t remove the algorithm, it must, at the very least, make significant changes, and have greater human involvement in the recommendation process.", man does this person know how many videos and how many users YouTube has? They cannot use anything except an algorithm to recommend videos. They cannot use anything except an algorithm to detect videos inappropriate for children. It seems YouTube is working on this, and this opinion seems like a ill thought out fluff piece to enrage readers and sell this persons book. kartan on June 14, 2019 | parent | next [–] > They cannot use anything except an algorithm to recommend videos. I agree that with the current business model it is not possible for YouTube to sort it manually. When I was a kid, a long long time ago, it would have been impossible to conceive that a TV channel showed that kind of content regularly and continue open. If their answer would have been that they cannot fix it because it costs money there would have been an outraged response. If YouTube cannot keep things legal, cannot respect people rights, cannot be a good responsible part of society because it is not cost effective for me the way to go is clear. And that is true for YouTube, Facebook or any other business digital or not. d1zzy on June 14, 2019 | root | parent | next [–] Youtube is not a TV channel, it's a video crowdsourced sharing site. If we want to have a "free" (as in no subscription and no money required to be payed for the service) video sharing/uploading site, what model would that make it work and still have human reviewing? I consider the fact that there may be undesirable videos as the cost of having such a site, similarly how to the "cost" of having a free Internet is that there's going to be lots of hate online and free access to tutorials to make bombs and what not. It's part of the deal and I'm happy with that, YMMV. If you worry about what kids might access then don't let them access Youtube but please don't create laws that would make free video sharing sites illegal/impossible to run. This is true for pretty much any free Internet service that allows for user content. If all of Internet content production will go back to just "official" creators (because they are the only ones where the cost/benefit math would make sense) I think that would be a huge loss/regression over what we have gained since the age of the Internet. seanmcdirmid on June 15, 2019 | root | parent | prev | next [–] When I was a kid in the 80s, cartoons were basically 30 minute toy commercials. My toddler loves watching videos on YouTube of Russian kids playing with toys, so I guess things haven’t changed much. CamperBob2 on June 15, 2019 | root | parent | prev | next [–] How about actually demonstrating harm to children (or to anyone else) before launching a moral panic? Is that an option? shearskill on June 15, 2019 | root | parent | next [–] I’d say having a 13 year old far right YouTube star post a video threatening to kill the CEO might be harmful, but maybe that’s ok? https://www.newstatesman.com/science-tech/social-media/2019/... 
CamperBob2 on June 15, 2019 | root | parent | next [–] Do you seriously think that kid was radicalized on YouTube? Where were the parents? shearskill on June 15, 2019 | root | parent | next [–] 2018: “I’ll pick a topic and just give my opinion about it try to be entertaining, try to be funny, try to be unique and say something other people haven’t said before,” youtuber said. https://redwoodbark.org/46876/culture/redwood-students-view-... 2019: In response, the principal of the high school sent a note to students and parents Thursday night regarding the "hate-based video and text posts attributed to one of our students": https://www.kron4.com/news/bay-area/bay-area-girl-says-she-l... robbrown451 on June 15, 2019 | parent | prev | next [–] I would think having humans more involved in training the algorithm could scale much better. Also, detecting videos that are inappropriate for children is a lot harder than determining certain content creators that are trustworthy to post videos that are appropriate (and to tag them correctly). That can be learned from the user's history, how many times their stuff has been flagged, getting upvotes from users that are themselves deemed credible, and so on. The more layers of indirection, the better, a la PageRank. So even without analyzing the video itself, it would have a much smaller set of videos it can recommend from, but still potentially millions of videos. You still need some level of staff to train the algorithm, but you don't have to have paid staff look at every single video to have a good set of videos it can recommend. The staff might spend most of their time looking at videos that are anomalous, such as they were posted by a user the algorithm trusted but then flagged by a user that the algorithm considered credible. Then they would tag that video with some rich information that will help the algorithm in the future, beyond just removing that video or reducing the trust of the poster or the credibility of the flagger. PaulAJ on June 15, 2019 | root | parent | next [–] The trouble with depending on user flags is that it creates opportunities for blackmail. https://www.theverge.com/2019/2/11/18220032/youtube-copystri... ehsankia on June 15, 2019 | root | parent | prev | next [–] The algorithm works really damn well for 99.999% of the cases. It manages to show me great recommendations from very niche things I'm interested in. But it's the very same behavior that can, in some cases, lead to issues. nraynaud on June 15, 2019 | root | parent | next [–] To me it always pulls me towards television or Hollywood garbage. And videos I have already watched, hundreds of them. jotm on June 15, 2019 | root | parent | next [–] You should check if personalized recommendations are disabled. Google has a history of disabling/enabling settings without telling me. Barrin92 on June 15, 2019 | root | parent | prev | next [–] are you sure that it's not you who knows very well how to curate their own content and who to subscribe to rather than the recommendation system? I'm not sure heavy automation is needed here, people jump from content creator to content creator by word of mouth. In contrast most algorithmic suggestions to me seem highly biased towards what is popular in general. 
I click on one wrong video in a news article and for the next two days my recommendations are pop music, Jimmy Kimmel, Ben Shapiro and animal videos ehsankia on June 15, 2019 | root | parent | next [–] Not for me, for example I've been watching a few PyCon and I/O talks, and it's been showing me other interesting PyCon talks that are highly ranked. It's also giving me good AutoChess and OneHourOneLife Let'sPlays, both of which I've been very interested in lately. All three things I just mentioned are fairly niche, comparatively, yet it knows that I've been watching a lot of them lately and is giving me more of it. robbrown451 on June 15, 2019 | root | parent | prev | next [–] I'm reminded of how Google images had an issue where dark skinned people sometimes turned up in a search for gorilla. 99.9% of the time, the image recognition algorithm did really well, but here was a case where an error was really offensive. What was (probably) needed was for there to be a human that comes in and, not tag every gorilla image, but simply to give it some extra training around dark skinned humans and gorillas, or otherwise tweak some things specific to that sort of case, so the chance of it happening was reduced to nearly nothing. There are probably a ton of situations like that in YouTube, where certain kinds of mistakes are hardly noticed (it shows you a video you weren't remotely interested in), but others can be really bad and need special training to avoid (such as where it shows violent or sexual content to someone who likes nursery rhymes and Peppa Pig). miemo on June 15, 2019 | root | parent | next [4 more] andrewvc on June 14, 2019 | parent | prev | next [–] Maybe they can't make editorial recommendations for the long tail but they absokutely could do so for the top few thousand videos each week. Would that yield an improvement? I don't know, but it would have an impact. scj on June 14, 2019 | root | parent | next [–] I'm kind of wondering if a "Ned Flanders" user-detector is possible. Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric. Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders Users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag. Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives. The big negative is showing people content they obviously don't want for the sake of collecting imperfect data. gus_massa on June 14, 2019 | root | parent | next [–] Should we filter all the pro-choice videos or the pro-life videos? Should we filter all the Santa-is-fake videos or the Santa-is-real videos? Do you agree with Flanders? undersuit on June 15, 2019 | root | parent | next [–] Maybe Youtube and their revenue sources agree with him. 
scj on June 14, 2019 | root | parent | prev | next [6 more] Nasrudith on June 14, 2019 | root | parent | prev | next [–] The question I have is how can they tell "Flanders" viewers from "bored" ones or "out of time" ones short of them flagging it without a lot of manual review and guess work? Reviewing viewers on that level sounds even more intensive than filtering every channel and video. scj on June 14, 2019 | root | parent | next [–] In the system I've proposed, if there are enough test-Flanders thrown at the content the times closed should be different enough to trigger an unclear Flanders rating. This would indicate some other metric should be used. I don't see this test working in isolation. Given it's nature, it's value is in obscure rejection statements rather than acceptance (or "okilly-dokillies" in this case). To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available. Mirioron on June 14, 2019 | root | parent | prev | next [–] The problem is that just a few examples of the algorithm getting it wrong is enough to cause an adpocalypse. If millions of videos are uploaded every month then you can imagine how low the error rate has to be. scj on June 14, 2019 | root | parent | next [–] If Google takes the impractical route and hires a sufficient number of multilingual Ned Flanders, then they're still probably going to have a non-zero false positive rate (humans make mistakes too). Whatever they do is going to have to be evaluated in terms of best effort / sincerity. Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks. gibrown on June 14, 2019 | parent | prev | next [–] The total number of videos really doesn't matter, it is the total number of creators, which at least this site claims is a total of 50m for all time: https://mediakix.com/blog/youtuber-statistics-content-creato... (first result I found) Just start banning certain creators from showing up in recommendations if their content crosses the line. Not that hard if you are willing to do it. cortesoft on June 14, 2019 | root | parent | next [–] But how would that solve the problem that the article opened with? There is nothing wrong with the videos of children playing, the wrong part was recommending them to pedophiles gibrown on June 14, 2019 | root | parent | next [–] Feels like the article was about more than that one issue. It also discussed creators slicing in frames of mickey mouse and other methods of gaming the alg. Most of the responses here seem to be buying into Google's hype around number of hours or videos uploaded per second. I think that is a distraction that lets them off the hook for not managing the community they built. Every algorithm is an editorial decision. undersuit on June 15, 2019 | root | parent | prev | next [–] No, the wrong part was when the pedophiles made inappropriate comments on the videos. Buge on June 15, 2019 | root | parent | next [–] If that's the problem, then gibrown's solution >Just start banning certain creators from showing up in recommendations if their content crosses the line. also won't help, because it's not the creators that have content crossing the line, it's the commenters. 
pythonwutang on June 14, 2019 | parent | prev | next [–] > They cannot use anything except an algorithm to recommend videos That’s assuming recommendations need to be personalized. They could recommend at a higher level to groups of people using attributes like age range or region. I’m not a fan of their personalized recommendations. It’s algorithm overfits my views to recommend videos extremely similar to videos I’ve recently watched, which isn’t really aligned with my interests. If they took a completely different approach (not personalized) it could really impact the UX in a positive way. icebraining on June 15, 2019 | root | parent | next [–] No thanks. You try logging out and see the generic recommendations. It's the lowest common denominator, just like anything else targeted at large masses of people. moomin on June 14, 2019 | parent | prev | next [–] You are 100% not thinking big enough. These algorithms identify clusters. These clusters can be examined through random sampling. It doesn’t take a genius to spot that a cluster that involves children and pornography might have some problems. Of course, the system doesn’t expose these kinds of outputs, because no-one has any interest in designing such a system and taking responsibility for the content. jgalt212 on June 14, 2019 | parent | prev | next [–] > man does this person know how many videos and how many users YouTube has While that might be true, 99% of the views are a very small subset of the videos posted. It's completely doable, or at the very least the problem can be greatly mitigated by putting more humans into the process and not letting the algos recommend videos that haven't been viewed by someone in Youtube's equivalent of "standards and practices". All that being said, I fear the primary reason this is not done is because such actions would reduce the number of hours of viewed videos and ad revenues. In fact, I've read articles supporting this theory. Google under Pichai is basically like Exxon under Lee Raymond--solely focused on revenue growth and completely blind to any number that doesn't show up on the current and next quarter's income statement. sharcerer on June 15, 2019 | root | parent | next [–] pichai doesn't come off as enthusiastic. I am a heavy Google product user. Watch all the hardware, I/O events etc, I have seen him use the same sentences multiple times over the past 2 years across events. I get that he won't exude the same charm, excitement as a founder-CEO, nevertheless a lot is left to be desired. A lot of his public statements feel like carefully crafted PR responses. Nothing wrong with crafted responses. When you are a 800 Billion$ company, you gotta be careful, but at least try to give off the perception of being authentic. Google is really bad at the perception game. Apple's really good at that. But I have a strong dislike for stupid moves, even more so than bad moves and Google has made lots of those stupid ones. scarface74 on June 14, 2019 | parent | prev | next [–] Just to add on, a Youtube executive was recently on a podcast and she said there are 500 videos uploaded per second. v7p1Qbt1im on June 18, 2019 | root | parent | next [–] Probably Neal Mohan on Recode right? The current public number is 500 hours per minute. But that number has been floating around for a while. It's probably higher now. RugnirViking on June 14, 2019 | root | parent | prev | next [–] thats.... 
actually shockingly few nostrademons on June 14, 2019 | root | parent | next [–] The stat I heard while at Google (~5 years ago) was that 8 hours of video is uploaded every second. Cross-checking that against the 500 videos/sec figure, it implies that the average video is about 1 minute. I suspect the 8 hours figure is pretty out-of-date now, and it's more like 20 hours/sec. BTW, you could do some simple math to figure out how many employees it'd take to have a human watch every video that comes in. 3600 secs/hour * 20 hours of video/sec = 72000 secs/video/sec, * 3 to assume 8 hour shifts = 216,000 employees, * $30K/year = $6.4B/year. It's theoretically doable, but you wouldn't get the product for free anymore. seretogis on June 14, 2019 | root | parent | next [–] $30k/year seems high. This is the sort of work that would be likely outsourced, perhaps to the Philippines for less than $10k/year per person. $2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft. nostrademons on June 14, 2019 | root | parent | next [–] $30K/year is minimum wage in Sunnyvale and Mountain View, where Google headquarters is. YouTube could probably outsource it internationally, but that'd just spark a new round of outrage: "Why are global community standards set by an American technology company outsourced to poor workers in the Philippines? Are these the people we want deciding our values?" taxidump on June 14, 2019 | root | parent | next [–] This is probably not the thought process this issue would travel down. Costs are typically the first consideration in a semi-skilled position if native english sounding isn't a requirement. deanCommie on June 15, 2019 | root | parent | prev | next [–] Because you'd be able to get humans with higher intelligence and better judgement for 10k/year in the Philippeans, than at minimum wage in the US. mc32 on June 15, 2019 | root | parent | prev | next [–] They already outsource their moderation to mostly the Philippines so there’d be no change. scarface74 on June 14, 2019 | root | parent | prev | next [–] Considering that rumors are that YouTube is still barely above break even, that is a lot. icelancer on June 15, 2019 | root | parent | prev | next [–] >> $2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft. One is an investment/one time purchase and the other is a long-term annual liability, slated to grow. gowld on June 14, 2019 | root | parent | prev | next [–] A billion videos per year is shockingly view? razius on June 15, 2019 | parent | prev | next [–] They don't care, they want to push them into approved content rather than recommended content. Aka "these are the topics that you are allowed to speak of". See current Pinterest scandal and banning from Youtube of any video mentioning this. mtgx on June 14, 2019 | parent | prev | next [–] All true. But all of this is making me wonder - what are the people thinking who say they can't wait for our society to be run by AI? The apex of AI capability can't even recommend videos properly right now, and we want it to run all the aspects of our society?! No, thanks. icebraining on June 15, 2019 | root | parent | next [–] What those people actually mean is "I can't wait for AI to be so good that it'll be obvious that it should run all the aspects of our society". The current state is irrelevant, nobody wants to put those in charge. 
jodrellblank on June 15, 2019 | root | parent | prev | next [–]

What are the people thinking who say they can't wait for our society to be run by humans? The most common state of human government capability can't even put human suffering before numbers in a virtual bank account, can't prioritise truth over falsehood, can't restrain themselves from bribery, can't reliably turn up to hearings or ballots, can't organise projects and complete them, can't compromise when millions of people depend on it. We want to dismiss alternatives which haven't even been developed yet for not being good enough?

v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]

The argument is that a hypothetical benevolent ASI can't be corrupted like literally all humans can. Those people are likely referring to AIs as they appear in Iain Banks's The Culture series.

blueboo on June 14, 2019 | parent | prev | next [–]

Such an effort would cost literally millions of dollars and surely sink this fledgling startup.

ggggtez on June 14, 2019 | root | parent | next [–]

I don't think sarcasm with no substance behind it is very insightful. Humans are involved in the process. To suggest otherwise is to be willfully ignorant.

rosterface on June 14, 2019 | prev | next [–]

Ah, the classic "think of the children!" argument. It is no one's responsibility other than the parent's to ensure their child isn't watching inappropriate content (which will be different for every family and individual). This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ: the New York Times may not like the result, but they work for the vast majority of users on any service with too much content to manually curate.

Miner49er on June 14, 2019 | parent | next [–]

I don't think that's the point. It is false advertising for YouTube to create YouTube Kids for kids, and then not have content that is appropriate for kids on it.

FabHK on June 14, 2019 | parent | prev | next [–]

> This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ,

The article cites actual instances and recurring problems showing that machine learning and collaborative filtering are incapable of producing healthy recommendations: even when YouTube tried to produce child-friendly content, they failed. You can't just say "it's fine" after the article shows it not being fine.

AbrahamParangi on June 14, 2019 | parent | prev | next [–]

Setting aside the personal responsibility angle for the moment (which I agree with you on!), don't you think that negative externalities should generally be managed? YouTube is a paperclip maximizer (where paperclips correspond to eyeball-hours spent watching YouTube), and at some point optimizing paperclips becomes orthogonal to human existence, and then anticorrelated with it. I think it's a perfectly fair thing to say that maybe the negatives outweigh the positives at present. (This argument doesn't apply solely to YouTube, of course.)

bengotow on June 14, 2019 | root | parent | next [–]

I generally agree with you, but I think YouTube being safe for kids became their problem when they launched a version specifically for kids and marketed it as safe.
RHSeeger on June 14, 2019 | parent | prev | next [–]

> It is no one's responsibility other than the parent to ensure their child isn't watching inappropriate content

Society has had laws in place to prevent children from viewing things they should not be viewing (inappropriate movies, magazines, etc).

scarface74 on June 14, 2019 | root | parent | next [–]

What law is there to prevent a kid from going on the internet and going to "inappropriate" sites? Watching video on cable? Finding their Dad's Playboy magazine back in the day?

Fins on June 14, 2019 | root | parent | next [–]

On cable there are ways to lock out channels, setting ratings on the TV and all that. If dad doesn't hide his Playboy well enough, it's obviously on him to fix it. On the internet it is much more difficult, of course, and we can't realistically expect some shady offshore site to implement age checks, let alone recommendation algorithms. But Google is a public, respected company from a first-world country that claims to be promoting social good (which, of course, is marketing BS, and even if it weren't I would not want their idea of social good, but still). You'd think that they would invest some effort into not showing inappropriate content to kids, at least. But no, they throw up their hands and go on ideological witch hunts instead.

scarface74 on June 14, 2019 | root | parent | next [–]

I've got an idea - don't let your kids get on YouTube and only allow them to get on curated sites. You can easily lock down a mobile device to only allow certain apps/curated websites.

Fins on June 15, 2019 | root | parent | next [–]

I don't let mine anywhere near a TV or computer. Of course that might be a bit more difficult once they get old enough to actually reach the keyboard... But then I try to not let my mom on YouTube either. Or myself, for that matter.

v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]

lol, do you even children. They will always find a way. You can restrict apps and services all you want. How about their friends at school? Are you going to restrict their phones as well? The only thing that works is actually talking to the kids about things they've seen/experienced. Not saying that is easy, of course.

Nasrudith on June 14, 2019 | root | parent | prev | next [–]

No we don't - not in the US. Short of literal pornography that could fall afoul of corruption of a minor, the state isn't involved. That is just from ratings cartels and pressure groups. If nobody gives a fuck enough to affect business, you can give the complete SAW series to 3-year-olds and all the offended can do is yelp indignantly.

manfredo on June 14, 2019 | root | parent | prev | next [–]

Nope. This only applies to pornography, if I recall correctly. There are no laws against showing R-rated movies to kids; it's just the theaters that refuse to admit them. In 2011 the courts struck down a California law prohibiting selling M-rated games to minors, too.

taeric on June 14, 2019 | parent | prev | next [–]

This implies there is not a societal benefit from healthy options. The parents are the most well placed to know at an individual level. But responsibility is a cop out if you are just dropping it on someone. Granted, I agree it is a hard problem. Not even sure it is solvable. :(

rspeer on June 14, 2019 | parent | prev | next [–]

There are healthy recommender systems, like Spotify. YouTube is a _disastrously_ unhealthy recommender system, and they've let it go completely out of control.
throw20102010 on June 14, 2019 | root | parent | next [–]

Spotify's recommendation system is dealing mostly with artists that have recording contracts and professional production; their problem shouldn't be compared to YouTube's, which has to deal with a mix of professional, semi-pro, and amateur created content. Also there's more of a "freshness" aspect to a lot of YT videos that isn't quite the same as what Spotify has to deal with (pop songs are usually good for a few months, but many vlogs can be stale after a week). Not only that, but many channels have a mix of content, some that goes stale quickly and some that is still relevant after many months; how does a recommendation engine figure that out? It's better to compare Spotify's recommendations to Netflix's recommendations, which also deal with mostly professional content. Those two systems have comparable performance, in my opinion.

slg on June 14, 2019 | root | parent | next [–]

Why the content exists is also important. People create video specifically for YouTube. Very few people create music just to host it on Spotify. This results in the recommendation algorithm and all its quirks having a much bigger impact on the content of YouTube than on that of Spotify. Also, having that many people actively trying to game the recommendation algorithm can pervert that algorithm. That simply isn't a problem for sites like Spotify or Netflix.

jasode on June 14, 2019 | root | parent | prev | next [–]

> YouTube is a _disastrously_ unhealthy recommender system,

Can you explain with more details? I use YouTube as a crowdsourced "MOOC"[0] and the algorithms usually recommend excellent followup videos for most topics. (On the other hand, their attempt at matching "relevant" advertising to the video is often terrible, e.g. Sephora makeup videos for women shown to the male-dominated audience of audiophile gear. Leaving aside the weird ads, the algorithm works very well for educational vids that interest me.)

[0] https://en.wikipedia.org/wiki/Massive_open_online_course

ilikehurdles on June 14, 2019 | root | parent | next [–]

Yes. Elsagate is an example - the creepy computer-generated violent and disturbing videos that eventually follow children's content - or the fact that just about every gaming-related video has a recommendation for a far-right rant against feminism or a Ben Shapiro screaming segment. There's also the Amazon problem - where everything related to the thing you watched once out of curiosity follows you everywhere around the site.

jasode on June 14, 2019 | root | parent | next [–]

> Elsagate is an example,

Yes, I was aware of Elsagate.[0] I don't play games, so I didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos. I guess I should have clarified my question. I thought gp's "unhealthy" meant YouTube's algorithm was bad for somebody like me who views mainstream non-controversial videos. (An analogy might be gp (rspeer) warning me that asbestos and lead paint are actually cancerous but the public doesn't know it.)

[0] https://news.ycombinator.com/item?id=20090157

undefined1 on June 14, 2019 | root | parent | next [–]

> I don't play games so didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.

They don't. That's confirmation bias at work.

smt88 on June 14, 2019 | root | parent | next [–]

It's not 100%, but I'd consider "video games" => "Ben Shapiro" to be a pretty awful recommendation system, regardless of the reasoning behind it.
As far as I know, the group "video gamers" doesn't have a political lean in either direction.

I've definitely seen this with comics. I watched a few videos criticizing Avengers: Infinity War, and now I see mostly Ben Shapiro recs. It makes no sense. I never have (and never plan to) seek out political content on YouTube.

nostrademons on June 14, 2019 | root | parent | prev | next [–]

I watch a number of gaming videos and have never had a far-right video recommended. Don't know who Ben Shapiro is. It could be the type of games involved, since I usually watch strategy, 4X, city-building, and military sims. I usually get History Channel documentaries or "here's how urban planning works in the real world" videos recommended, which suits me fine. Somebody whose gaming preferences involve killing Nazis in a WW2-era FPS might be more likely to get videos that have neo-Nazis suggesting we kill people.

ilikehurdles on June 14, 2019 | root | parent | prev | next [–]

Some of the child comments of your thread mention the Nazi problem.

jasode on June 14, 2019 | root | parent | next [–]

But that child comment didn't link Nazis to normal "video games". I assumed he just meant some folks (e.g. "1.8%" of web surfers) with a predilection for far-right videos would get more Nazi recommendations. Well yes, I would have expected the algorithm to feed them more of what they seemed to like. I do not see any Nazi far-right videos in 1.8% of my recommendations, ever.

sorenn111 on June 14, 2019 | root | parent | prev | next [–]

Isn't that an inevitable side effect of collaborative filtering? If companies could do content-based recommendation, wouldn't they? Until purely content-based recommendations are possible, wisdom of the crowds via collaborative filtering will lump together videos that are about different things but watched by similar viewers.
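(A minimal sketch of the effect sorenn111 describes, assuming a toy item-item collaborative filter over a made-up user-video watch matrix; none of this is YouTube's actual system.)

```python
import numpy as np

# Toy watch matrix: rows = users, columns = videos. 1 = watched.
# Videos 0 and 1 are, say, gaming videos; video 2 is political
# commentary that happens to share an audience with video 0.
watches = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
])

# Item-item cosine similarity: two videos are "similar" if the same
# users watched them, regardless of what the videos are about.
norms = np.linalg.norm(watches, axis=0)
similarity = (watches.T @ watches) / np.outer(norms, norms)

# Score videos for a user who watched only video 0, masking seen items.
seen = np.array([1, 0, 0])
scores = similarity @ seen
scores[seen == 1] = -np.inf
print("recommended video:", int(np.argmax(scores)))  # -> 2, the cross-genre video
```

With this matrix the filter recommends the political video over the other gaming video, purely because of audience overlap; nothing in the computation ever looks at the content itself.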
posterboy on June 14, 2019 | root | parent | prev | next [–]

Spotify simply does not have the content over which an algorithm could lose control.

ariwilson on June 14, 2019 | root | parent | prev | next [–]

Spotify has 40M tracks total. On YouTube, more than 5B videos are watched by users every day. Different scales of problem demand different solutions.

amphibian87 on June 14, 2019 | root | parent | next [–]

I don't know what the comment you are replying to meant; I interpreted it to mean the algo takes you down a rabbit hole to darker content. For me, though, I miss the days when it actually recommended relevant videos, similar to the one I was watching. My entire sidebar is now just a random assortment of irrelevant interests. For instance, I once wanted to learn to play a denser piano chord. I learned it ages ago, but I still get like 20 videos that explain how to add extensions to a 7 chord, even if I'm watching a video on the F-35 fighter pilot.

restingrobot on June 14, 2019 | root | parent | prev | next [–]

I completely disagree; my children have a wonderful time following the recommended videos that YouTube provides. I'm interested to hear your reasoning on why it is "disastrous".

ihuman on June 14, 2019 | root | parent | prev | next [–]

How is Spotify's different from YouTube's?

notriddle on June 14, 2019 | root | parent | next [–]

I'm pretty sure all content on Spotify gets manually curated first, so abusive tagging doesn't happen, and some of the worst content simply doesn't get uploaded at all. Spotify also doesn't try to be a news site, so they can afford to have a couple weeks' lag between uploading a song and having it show up in people's recommendation feeds.

anticensor on June 14, 2019 | root | parent | prev | next [–]

More selective recommendation, all-subscriber environment.

aukust on June 14, 2019 | root | parent | prev | next [–]

I disagree in some sense. I personally have found the recommendation system on YouTube pretty good for the main page of the site. The thing that bugs me is the recommended bar to the right (or bottom right) of the videos, which can be really annoying and infested with clickbait etc.

avip on June 14, 2019 | parent | prev | next [–]

It's easier, and more profitable, to write a book than to confront your kids about screen time.

la_barba on June 14, 2019 | parent | prev | next [–]

I want to place a kid in front of a screen, press a button and walk away. How am I supposed to do that now?

oh_sigh on June 14, 2019 | parent | prev | next [–]

What about when YouTube marketed a specific product for children, but then it turned out they were letting really, really weird stuff in there?

sneakernets on June 14, 2019 | parent | prev | next [–]

> It is no one's responsibility other than the parent

Yes, but you _must_ understand that most (no, ALL) of the millennial generation grew up with public content over the airwaves that was curated and had to pass certain guidelines. So many parents think that the YouTube Kids app is the same thing. It's not! If YouTube wants to be the next television, they're going to have to assume the responsibilities and expectations surrounding the appliances they intend to replace. Pulling a Pontius Pilate and tossing the issue to another algorithm to fail at figuring out is not going to fix the problem. Thankfully, there's much more out there than YouTube when it comes to children's entertainment, actually curated by human beings with eyeballs and brains, and not algorithms. The problem is that parents don't know these apps even exist, because YouTube has that much of a foothold as the "place to see things that shut my kid up, so I can see straight."

restingrobot on June 14, 2019 | prev | next [–]

I don't think this is incentivizing bad behavior. It's merely showing the viewer more of what they are already watching, with a gradual introduction to broader material. The example of YouTube serving content to "pedophiles" is borderline asinine. The neural network is just making suggestions on viewing; it's not telling people to watch. In regards to the complaint that "adult" content is being served to adolescents, there is an option to filter out sensitive content altogether. Also, as a parent of 4 children myself, the idea of letting my kids loose on the internet completely devoid of any supervision is ridiculous. When did it become YouTube's responsibility to parent the children in its audience? Should we also ban HBO, Amazon, and Netflix from providing recommendations because it might be a child in front of the screen? This is just another pointed attempt to censor free speech via the abuse of technology companies, the idea being that the platform will become restrictive if it is constantly badgered about it.

masklinn on June 14, 2019 | parent | next [–]

> with a gradual introduction to broader material.

It doesn't gradually introduce broader material; it gradually introduces more "engaging" material.

restingrobot on June 14, 2019 | root | parent | next [–]

I would argue that your point is semantics, but even so, you still have a choice of whether or not to watch the recommended more "engaging" material. It doesn't change the overall point of my statement.
tvanantwerp on June 14, 2019 | root | parent | next [–]

I'd say it's quite a different point. My own experience has been that the recommended "engaging" material is something in the same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality. It's the difference between saying "I see you enjoy sandwiches, maybe you would also enjoy salads or a plate of sushi" and "I see you enjoy sandwiches--here's a candy bar, an off-brand soda made with high-fructose corn syrup, and a carton of cheap grocery store ice cream."

restingrobot on June 14, 2019 | root | parent | next [–]

The semantics argument I was pointing out was in regards to "broader" vs "engaging". That's not what my statement was about; it was that no matter what the algorithm recommends to you, you still have the choice whether or not to watch it. The point you are making is purely anecdotal, as I assure you the neural network is not simply showing you the

> same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality

Faark on June 15, 2019 | root | parent | next [–]

You can keep telling yourself that you have a "choice", but in the end we are all just humans, with quite predictable behavior. Biased selection of content has forever been one of the more effective ways of shaping opinion; politics is fighting hard on that front for a reason. For the first time ever, a very few algorithms are selecting content for millions of people, with apparently little human oversight. Yes, this should worry us. Simply assuming the results will benefit mankind, especially in the long term, would be foolish. It's not quite exactly like the usual AI-safety paperclip scenario, but by now it should be very obvious that optimizing watch time, even with current "AI", comes with significant unintended side effects / drawbacks.

hrktb on June 14, 2019 | parent | prev | next [–]

> just making suggestions on viewing, it's not telling people to watch

I'm not sure I get the difference between suggesting content and telling people what content to watch. Were you trying to drive at a different point? That aside, it seems your argument is that YouTube being neutral in recommending videos shelters them from blame, while the article is basically about why being neutral is harmful. I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity. Just as we don't allow generic tools to kill and save people in the same proportion, we want a clear net positive.

restingrobot on June 14, 2019 | root | parent | next [–]

To make my first point clear, here is a scenario:

I walk up to you on the street and suggest you give me a dollar.

vs

I walk up to you on the street and take a dollar from you by force.

YouTube is a platform; in order to remain a platform it MUST remain neutral. You cannot have an open forum with bias. There are certain mutually agreed-upon rules (no nudity, extreme violence, etc.); those limitations are more than enough to handle the vast majority of "negative" content. I wholeheartedly disagree that we need a bias towards positivity. Who determines what that definition is? Something you see as negative, I might happen to enjoy. If YouTube begins to censor itself in that way, it is no longer a platform and is now responsible for ALL of its content.

hrktb on June 14, 2019 | root | parent | next [–]

Thanks for the clarification on the first point.
Won't YouTube effectively shove the next recommended video at a user as long as auto-play is activated? Also, recommendations are the default view; I'd argue they are a lot more than just "suggestions". It would be akin to a restaurant "suggesting" their menu, and you'd need to interrogate the waiter to explore what else you could be served. For most people, the menu is effectively the representation of the food of the restaurant.

As for neutrality, if you recognize there are agreed-upon rules, as you point out, the next question becomes: who agreed on these rules, and who made them? Who agreed nudity should be banned? Which country? What nudity? And art? And educational content? And documentaries? At which point does it become nudity? The more we dig into it, the fuzzier it becomes; everyone's boundary is different, and all the rules are like that. Any rule in place is positive to one group and negative to another; for a rule to stay in place it needs to have more supporters than detractors, or, put another way, more positive impact than negative. The current set of rules are the ones that were deemed worthwhile. I think it's healthy to challenge them, or to push for other rules that could garner enough agreement to stay in place.

restingrobot on June 14, 2019 | root | parent | next [–]

> Won't YouTube effectively shove the next recommended video at a user as long as auto-play is activated?

You can very easily turn auto-play off. There is plenty of opportunity to switch videos. It would be different if YouTube forced you to watch the next video in order to use the site.

> As for neutrality, if you recognize there are agreed-upon rules, as you point out, the next question becomes: who agreed on these rules, and who made them?

YouTube made them. Those are pre-conditions for uploading videos. They don't have to have any reason why they made them; those are conditions that must be met in order to upload a video. So by uploading a video you are agreeing to them.

> Any rule in place is positive to one group and negative to another

I don't agree with this generality. However, this discussion is not about the legitimacy of the rules to use YouTube; it is whether or not YouTube should censor videos (that meet the basic rules of use). My opinion is no; yours, as you stated above, was:

> I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity.

I agree with you that YouTube should routinely challenge their own rule sets. That is not the same as censoring their content, or in this case modifying their recommendation algorithm.

SomeOldThrow on June 15, 2019 | parent | prev | next [–]

The broader material is the problem. It's not a natural way of using recommendations: it's just an ad at that point.

la_barba on June 14, 2019 | prev | next [–]

I think YouTube has just exposed the kind of content people were already interested in, and possibly consuming outside of the public eye. We find it frightening that people readily click on abhorrent content, when they probably were doing it over other platforms earlier. The internet has had gore videos for the longest time; I remember a shotgun suicide video that kids in my school used to shock each other with. If Google as a private company chooses to ban content, then that is their right, but an a priori expectation that an entertainment platform should control people's social behavior and enforce morality is harmful in a free society, IMHO.
zanny on June 14, 2019 | parent | next [–]

People were fueling industries of creatively bankrupt content well before the Internet came around; just look at the long-term existence of tabloids. YouTube is optimizing for the underlying psychological mechanisms that put people in that mood, because it makes them suggestible, and because none of this stuff has substance or meaning, they can graze on it, just as junk food producers want to promote grazing.

FabHK on June 14, 2019 | root | parent | next [–]

I think the analogy to junk food is instructive. Both fast food and YouTube maximise revenue while minimising costs by exploiting human flaws and foibles, and do so much more effectively than was possible 100 years ago. It is creating an environment that is radically different from the one we evolved in. Watching hours of YouTube - obesity of the mind. Kind of.

la_barba on June 14, 2019 | root | parent | prev | next [–]

> YouTube is optimizing for the underlying psychological mechanisms that put people in that mood

Well, YouTube (or any advertising platform) also wants people clicking on ads and actually buying things, not just grazing. AFAIK they already demonetize content that is not advertiser-friendly, and thus de-prioritize it. Busy professionals with limited free time are your best bet for people with a lot of disposable income. If anything, YouTube optimizes for content that is safe-for-work and will quickly lead to you opening your wallet. But yes, I think this is a large-scale multivariate problem, and individual simple metrics don't cut it.

KoenDG on June 14, 2019 | prev | next [–]

I doubt this person does not care about the subject they wrote about. And if the algorithm is producing negative side effects, then, of course, it should be looked at and changed.

I'm no expert myself, but to my understanding: any algorithm is limited by its data set. Based on its data set, an algorithm comes to conclusions. But one can then, of course, ask: what's the basis for these conclusions?

I recall reading that a certain AI had been fooled into thinking a picture of a banana was showing a toaster or a helicopter, after a few parts of the image were changed to contain tiny bits of those items. It turned out that the AI used the apparent texture at places in the image to determine what was in the image, rather than doing a shape comparison. Which sounds like a time-saving measure. Though it may very well have been the method that most consistently produced correct results for the given dataset.

Frankly, the attitude of "we don't know how it works and we don't care" cannot possibly end well. Neither can the attitude of "oh well, make a better dataset then". I get that we're all excited about the amazing abilities we're seeing here, but that doesn't mean we shouldn't look where we're going.

I recall a story of an AI researcher who didn't want to define anything because he was afraid of introducing bias. Upon hearing this, his colleague covered up his eyes. When asked why he did this, he replied: "The world no longer exists". And the other understood. Because of course the world still exists. And in just the same way, it's impossible to get rid of bias. Some human intervention is needed, just like constant checks and comparison against human results.
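(A toy version of the banana-vs-toaster story KoenDG recalls, as a sketch only: a made-up linear "classifier" flipped by a small, gradient-guided perturbation. Real adversarial examples target deep networks, but the mechanism of nudging inputs along the model's gradient is the same idea.)

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                         # hypothetical learned weights
x = 0.1 * w + rng.normal(scale=0.5, size=100)    # an input the model scores as "banana"

def p_banana(v):
    # Sigmoid score for the "banana" class of a linear model.
    return 1 / (1 + np.exp(-(w @ v)))

print("clean input:", p_banana(x))

# Fast-gradient-sign-style perturbation: move each feature a tiny step in
# the direction that lowers the "banana" logit (its gradient w.r.t. x is w).
eps = 0.25
x_adv = x - eps * np.sign(w)

print("perturbed:  ", p_banana(x_adv))                      # should drop near 0
print("max per-feature change:", np.abs(x_adv - x).max())   # exactly eps
```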
dbt00 on June 14, 2019 | parent | next [–]

The problem of the dataset is not just that AI will pick shortcuts and naive heuristics, because humans will too. The problem of the dataset is that you're not in control of who populates the dataset and what their intentions are. There's no understanding of an adversarial model and threat handling.

michaelbuckbee on June 14, 2019 | prev | next [–]

The NYTimes uses a very human "algorithm" to determine what to report on, and if you look at the comparison of causes of death to what's reported, it's wildly off.

Data: https://ourworldindata.org/uploads/2019/05/Causes-of-death-i...

This isn't a knock against the NYTimes so much as it is of humanity; we're all fascinated by the lurid and sensational (note that the Google searches are similarly off), and this permeates all levels of life.

vgetr on June 14, 2019 | prev | next [–]

I feel like things were mostly fine until the 2016 election, after which journalists became _very_ concerned. If I had a nickel for each "The algorithms are coming! The algorithms are coming!", I'd be rich. I mean, I didn't like the outcome either, but these types of articles seem to be motivated by a) finding a scapegoat and b) wanting to use "algorithm" in a sentence.

bjt2n3904 on June 14, 2019 | prev | next [–]

What a pleasant way of stating that humans are basically good. We just keep passing the buck. "We'd be fine if it weren't for this algorithm!"

We believe that man is essentially good. It's only his behavior that lets him down. This is the fault of society. Society is the fault of conditions. Conditions are the fault of society.

If you ask me, "YouTube's algorithm" is simply exposing the way humanity is. And trying to get an algorithm to "shepherd" humanity to be better is simply Orwellian.

module0000 on June 14, 2019 | prev | next [–]

> If YouTube won't remove the algorithm, it must, at the very least, make significant changes

It must? No, it doesn't have to do a damn thing. It's a product from a publicly traded company; therefore it "must" return value for stockholders. That means more behavior that increases ad revenue. The author is out of touch with reality. Stop feeding your kids YouTube if you don't want them exposed to YouTube. It's a private service (YouTube), not a public park.

drewbug01 on June 14, 2019 | parent | next [–]

> It must? No, it doesn't have to do a damn thing.

Subject to the laws of the jurisdiction in which it operates, of course. We could - if we so wanted - pass laws to regulate this behavior. That is perhaps the best option, in my own opinion.

> It's a product from a publicly traded company, therefore it "must" return value for stockholders.

The dogma that it "must" return value for shareholders is not an absolute rule[1]; rather, it's a set of market expectations and some decisions from Delaware (which have an outsize impact on business law) that encourage it. But it's not required. In fact, many states allow a type of corporation that specifically and directly allows directors to pursue non-shareholder-value goals - the benefit corporation[2].

> The author is out of touch with reality.

Please re-read the HN guidelines[3].

> Stop feeding your kids YouTube if you don't want them exposed to YouTube. It's a private service (YouTube), not a public park.

This is essentially the doctrine of caveat emptor: that a consumer is ultimately responsible for all behavior. However, a wealth of regulation exists because that's unworkable in practice.
The FDA and the EPA come to mind, but we also regulate concepts like "false advertising." Your stance here ignores the realities of life in service of ideological purism.

[1] http://web.archive.org/web/20190327123200/https://www.washin...

[2] https://en.wikipedia.org/wiki/Benefit_corporation

[3] https://news.ycombinator.com/newsguidelines.html

Nasrudith on June 14, 2019 | root | parent | next [–]

No, we cannot pass laws that do that, no matter how indignant we may be. The whole bloody point of the constitution is that no matter how pissed off the majority is (or "the majority", which may just be a noisy minority), you cannot simply legislate away rights. The vague "do something!" regulation push has all of the marks of a moral panic, and all participants should slap themselves hard enough to leave a mark and repeat "It is never too important to be rational."

AlexandrB on June 15, 2019 | root | parent | next [–]

Please explain what rights would be legislated away in this case. It's definitely not the 1st amendment - you can still say what you want, just not necessarily on the platform of your choice. This was equally true in the broadcast TV days. So what other right(s) would be legislated away by regulating YouTube's content?

Nasrudith on June 15, 2019 | root | parent | next [–]

Broadcasters had the special pleading, with some scintilla of a point, in that there were actual shared commons to prioritize. In practice it was a fig leaf, as you never saw arguments in broadcast censorship over 'values' wrestle over airwave ownership, but instead bullshit doctrines like 'community standards'. The fact that the US has a long history of laying out rights for all, seeing the revolutionary implications, and then saying 'No wait, that can't be right, it is too different' and going back to the bullshit control it had before for a few centuries is a whole other sad topic.

One thing that did make it through was the ruling that mediums which lack said limitation, like cable and internet, don't have the rationale for that restriction, and thus the censorship that weak minds had become accustomed to vanished in a puff of logic. This has been the case since cable porn channels were a thing.

By regulating YouTube you effectively regulate what /all/ platforms may push. It is no longer simply that YouTube decides "You know what, we don't want to post that" - an exercise of their collective freedom of association - but "The government doesn't want us to post that, so we can't." You can't just deputize tasks to third parties and expect the limits on exercises of power to vanish; otherwise we'd see hordes of private detectives as a workaround to Fourth Amendment rights. Said regulations on YouTube would be a major infringement upon freedom of the press and speech - not to mention logically equivalent to censoring the press whenever it fits whatever criteria the government dislikes.

FabHK on June 14, 2019 | parent | prev | next [–]

No. As you yourself recognise (presumably, as you put the "must" in scare quotes and italics), that companies "must" maximise shareholder value is a goal contingent on our decisions and policies, not some natural law. Of course, it is incumbent on us individually to behave responsibly. But there is space for public policy and regulation, even of YouTube.

fzeroracer on June 14, 2019 | parent | prev | next [–]

Incentivizing value for stockholders above all else is a good way to ensure incredibly anti-consumer practices grow in popularity.
Something you might only begin to notice when your kids start getting recommended games from their friends that require you to gamble with IAP to get a new dance or something.

bryant on June 14, 2019 | prev | next [–]

This seems like it takes some notes from Veritasium's theory on YouTube's recommendation algorithm, which he posted after his initial reservoir shade balls video went viral. (Edited for clarity.)

https://www.youtube.com/watch?v=fHsa9DqmId8 for his theory.

raz32dust on June 14, 2019 | prev | next [–]

YouTube's incentives are the best among such platforms, IMO. They allow a simple profit-sharing model where a part of the ad revenue goes to the content creator. This is unlike Instagram, for example, where the content creators have to peddle products in their ads to make money. Take a fitness channel, for example: on YouTube, the content creator can just be honest, and the views alone will guarantee income. On Instagram, on the other hand, they have to resort to selling snake oil. I love YouTube for this, and I am constantly amazed by how YouTube has become a livelihood for so many.

kauffj on June 14, 2019 | prev | next [–]

http://archive.is/6lbCR (Archive link for those who prefer non-broken web experiences.)

everyoneisbias on June 14, 2019 | prev | next [–]

It's all about advertising money. TV and newspapers are dying and they need someone to blame.

restingrobot on June 14, 2019 | parent | next [–]

I personally think this has deeper political motives as well, but yes, I completely agree with you!

jaydz on June 14, 2019 | parent | prev | next [–]

I'm sure Google and Facebook understand this; hopefully they won't cower any further. Big Media wants its "fair share" and will keep attacking until it gets it.

Analemma_ on June 14, 2019 | prev | next [–]

I don't know if YouTube's problems are so bad that the argument applies in this case, but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense. Just as banks shouldn't be allowed to get so large that they can't fail without wreaking havoc on the economy, if algorithmic recommendation and moderation can't work, then maybe social networks shouldn't be allowed to get so large that human moderation is not possible.

restingrobot on June 14, 2019 | parent | next [–]

That is an apples-to-oranges comparison; YouTube is a platform, not an institution. It is open to all videos, provided they meet certain agreed-upon guidelines, and should not be responsible for censoring content based on individual opinions. I don't think the recommendation engine is broken at all; in fact, it works astonishingly well for the vast majority of people. The fact that there are a few bad actors is also present in the banking industry (Wells Fargo, for instance), to use your own bad comparison.

munk-a on June 14, 2019 | root | parent | next [–]

YouTube is asserting editorial and publishing rights when it promotes certain videos. If it were a pure video hosting site (providing a link to uploaded videos for people to do with as they please), then I'd agree they were just a platform; but a newspaper isn't a platform, and neither is YouTube.

restingrobot on June 14, 2019 | root | parent | next [–]

YouTube is asserting on behalf of the people who own the publishing rights, not on behalf of itself. This is an important distinction. YouTube is not the same as a newspaper in any way, shape, or form; I don't really understand your comparison.
skybrian on June 14, 2019 | parent | prev | next [–]

The queue for getting your video posted on YouTube would grow infinitely. (Or, more realistically, people would give up and not bother once it takes years.) But I guess they could charge money to get to the head of the line?

ilikehurdles on June 14, 2019 | root | parent | next [–]

The queue for having your video uploaded and public does not at all have to be the same queue for getting your video included in others' recommendations.

nostrademons on June 14, 2019 | root | parent | next [–]

I can just see the outrage now: "YouTube running a pay-to-play scheme for exposure. Anyone can upload their video, but only the rich can get an audience!" Come to think of it, this is basically the complaint against AdWords and the gradual takeover of the search result page by paid results.

shearskill on June 15, 2019 | root | parent | next [–]

This is exactly what happens. PragerU and Ben Shapiro advertise heavily on content adjacent to them (gaming); their views go up, and up they go in the algorithm.

hrktb on June 14, 2019 | root | parent | prev | next [–]

There could be a middle ground where videos have limited visibility until getting vetted, or a karma system to fast-track regular uploaders, etc. I think there's a ton of ideas to be tried.

PretzelFisch on June 14, 2019 | root | parent | prev | next [–]

That's not true; you can upload a video and not allow it to be recommended until some human review has been done. Most YouTube channels don't need the recommendation engine.

LocalPCGuy on June 14, 2019 | root | parent | next [–]

That just isn't feasible. Videos would literally take years to get into recommended status - another comment pointed out there are 500 new videos uploaded per SECOND.

munk-a on June 14, 2019 | root | parent | next [–]

If there was one dude, sure. But apparently YouTube is in the business of supporting the upload of 500 videos/second, so they need to deal with the consequences of it. It's not like there's any regulation forcing them to be the place everyone uploads videos to, and there are some valid competitors (though they're far less into the publishing/editorializing facet - Vimeo is much more often direct-linked, for instance).

jerf on June 14, 2019 | root | parent | next [–]

To be clear, I am not speaking for anybody in this thread but myself. But I will unapologetically and forthrightly say that, yes, if we're going to assert that YouTube has certain responsibilities for the nature of the videos that it hosts, and it turns out that the nature of those responsibilities is such that YouTube can't possibly meet them, then, yes, YouTube as we know it should be essentially shut down, at least going forward.

I am NOT going to say we should deliberately craft the responsibilities in such a way that YouTube is deliberately shut down. However, if it turns out that they are incapable of applying even the bare minimum effort that we as a society deem necessary for them to apply, then, yes, it is absolutely a consequence that YouTube as we know it today may have to be so radically altered as to be a different site entirely.

In the general case, when the law requires certain obligations of you as a business, and you as a business cannot meet them, that does not mean that suddenly those obligations are not applied to you. It means that your business is not legally viable, and needs to change until it is.
It may be the case that there is no solution to being both legally viable and profitable, in which case your business will cease to exist - just as there is, for instance, no solution to being a business built around selling torrent files containing unlicensed commercial content to people. You can't defend yourself by saying you can't afford to get the licenses; your suitable legal remedy was to never have started this business in the first place. There are some concerns around grandfathering here to deal with, certainly, but existing businesses can still be affected going forward.

There is no guarantee that there is a solution where a company exerting whatever minimal control society obligates it to assert is capable of growing to the size of YouTube. If that is the case, so be it. The solution is not to just let them go because they happened to grow fast first.

(My solution to freedom of expression is an explosion of video sites, where each of them has ways of holding the videos to the societally mandated minimum standard, and no one site can do it all because they simply can't muster the resources to be The One Site, because as they grow larger they encounter anti-scaling effects. Given how increasingly censorious Silicon Valley is becoming, as we are now into censoring the discussions about censoring discussions, like the recent removal of Project Veritas from Twitter for its discussion of Pinterest censoring pro-life films, I expect this to increase the range of expression, not diminish it.)

nostrademons on June 14, 2019 | root | parent | next [–]

Not speaking on behalf of what I want, but on behalf of what is true:

> It may be the case that there is no solution to being both legally viable and profitable, in which case your business will cease to exist.

Or your business will exist illegally. There's this interesting interplay between law and economics, where law is generally taken as a prerequisite for frictionless commerce, and yet at the same time, if activities that large groups of people wish to partake in are made illegal, the market just routes around them and black markets spring up to provide them. Prohibition. The War on Drugs. Filesharing. Gambling. Employing illegal immigrants. Usury. Short-term rentals. Taxi medallions. Large swaths of the economy under communism.

There are a couple of other interesting phenomena related to this: the very illegality of the activity tends to create large profits around it (because it creates barriers to entry, such that the market often ends up monopolized by a small cartel), and the existence of widespread black markets erodes respect for the rule of law itself. When people see others around them getting very rich or otherwise deriving benefit from flouting the law, why should they follow it?

Switching to editorializing mode, I find this gradual erosion of respect for law quite troubling, and I think that the solution to it needs to be two-fold: stop trying to outlaw behaviors that are offensive to some but beloved by others, and start enforcing laws that, if neglected, really will result in the destruction of the system.

jerf on June 17, 2019 | root | parent | next [–]

"Or your business will exist illegally."

True.
In the context of this particular case, I was assuming that nothing the current size of YouTube could exist illegally, as that would imply that whatever authority was declaring it "illegal", yet was not capable of doing anything about it despite it nominally living in its jurisdiction, must be anemic and impotent to the point of being nearly non-existent. There's already an underground proliferation of video sites spreading copyrighted content beyond the bounds of what the rightsholders want, so it's pretty much assured we'd end up with illegal alternatives. :)

andromeduck on June 14, 2019 | root | parent | prev | next [5 more]

PretzelFisch on June 14, 2019 | root | parent | prev | next [–]

Some of that can be alleviated by trusted publishers (i.e. Fox, CBS, ABC...) who won't need a review, and by the introduction of a paid queue. Just because they don't want to do it today doesn't mean it's an impossible solution - just a hard one.

Nasrudith on June 14, 2019 | root | parent | next [–]

That sounds like the exact shit people left TV for. Let's not recreate television oligarchies for the sake of those afraid of change.

v7p1Qbt1im on June 18, 2019 | root | parent | prev | next [–]

> Most YouTube channels don't need the recommendation engine.

This is just not true. A massive part of views originate from recommended/up next. Ask pretty much any creator. Only the core audience of a channel will have the notification bell on for a specific channel. Many users don't check the Subscriptions section and either link in from an external source, know beforehand what they want to search for, or just watch what pops up in recommended.

peteretep on June 14, 2019 | parent | prev | next [–]

> but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense

This is a great point that I was going to phrase slightly differently: if YouTube is too large to be able to prevent harm, YouTube needs to be regulated. YouTube gets the benefit of being so large, so they should also get the cost.

aaomidi on June 14, 2019 | parent | prev | next [–]

Agree with you. If you can't do your job, then maybe you'll have to be shut down.

andromeduck on June 14, 2019 | root | parent | next [–]

Since when did it become YouTube's responsibility to police speech?!

yoz-y on June 14, 2019 | root | parent | next [–]

Disclaimer: I work for YouTube. My personal view on the situation is this: bear in mind that YouTube does not operate only in the US, with its unhinged free-speech laws. Many countries have stricter laws, and YouTube definitely needs to comply with them. Other than that, the adpocalypse happened because of bad videos being surfaced by the algorithm, so another responsibility is to the creators (and shareholders). There is nothing to be gained by having crap in your backyard.

aaomidi on June 15, 2019 | root | parent | prev | next [–]

It did when people started demanding it. A company doesn't exist in a vacuum.

tomgp on June 14, 2019 | root | parent | prev | next [–]

When they started making editorial decisions about which videos to promote and to whom - albeit via an automated process.

nradov on June 14, 2019 | parent | prev | next [–]

YouTube needs no defense in this case because video recommendations are protected free speech. In the US, at least, it would be impossible to outlaw video recommendations in a way that would pass Constitutional review.

sneakernets on June 14, 2019 | prev | next [–]

In addition to this, seeing content creators being slaves to the algorithm is an eye-opening experience.
Especially when it comes to children's videos. It's all computer-generated garbage powered by responses to changes in algorithms. If kids suddenly watch more content with alligators, prepare for that being the only thing created, recommended, or playing. It's wild.

tunesmith on June 14, 2019 | prev | next [–]

Still looking for recommendations that are influenced by people's best-self intentions of who they want to be, rather than by their worst-self behaviors.

edoo on June 14, 2019 | prev | next [–]

We know we have to keep kids sheltered from people who may have unscrupulous neural networks at play, looking for a way to further their own goals at the expense of a child's well-being and overall health. Engagement on the internet is also being driven by neural networks that are learning to adapt to the user's brain chemistry to statistically modify behavior for maximum engagement/profit. Perhaps it is time to realize that these services are going to be analogous to a random stranger offering your kid candy for their own twisted goals, which are unlikely to be compatible with a child's well-being. If you consider a service like YouTube an untrusted source of interaction, perhaps you'll be as likely to block or monitor it as you would random chat rooms.

toss1 on June 14, 2019 | prev | next [–]

YouTube can be a source of astonishingly great education and entertainment that will help grow society, as well as of astonishingly horrid corruptions of nearly anything, which will corrode society at its roots. Most of these discussion posts seem to miss the point that 'engagement' or 'upvotes' does NOT equal value. Also missing is the concept that a company with a massive platform has any social responsibility to at least not poison the well of society. And claiming "it's the parent's responsibility" may have some truth value, but it does not and should not be an excuse to absolve the platform owner of responsibility.

The key to the longer-term success of the platforms is to abandon the low-hanging fruit of "engagement" as a measure of value and develop more substantive metrics that actually relate to value delivered, both to the individual watcher and to society as a whole. As one audience member, I find their recommendations to be basically crap, nearly never leading me to something more valuable than what I just watched (sure, they'll occasionally put up a recommendation that has enough entertainment value to watch, but much of the time I want my 5 minutes back). To find any real value, I need to search again. That already tells us that their "engagement"-based algos are insufficient to serve the needs.

gersh on June 15, 2019 | prev | next [–]

I think there is an inherent problem in optimizing for retention time. Ideally, the recommendations should help people find stuff which improves their health, makes them happier, or makes them more informed about the world. However, it doesn't seem like YouTube has metrics on those things. Furthermore, things like that probably can't be determined very quickly on new content.

diogenescynic on June 14, 2019 | prev | next [–]

I mostly watch movie reviews on YouTube, and I'm constantly being recommended either weird Joe Rogan or alt-right content, or makeup videos. I don't get it. I've never clicked on or watched anything remotely associated with them. I suspect a lot of the popular YouTube channels are gaming the algorithms or SEOing their videos to get more recommendations.
restingrobot on June 14, 2019 | parent | next [–]

The neural network takes into account videos that other people who watched the same one you did also watched. It's quite possible that the movie trailer you watched was popular among demographics that also watched those recommendations. If you don't have a lot of data for yourself, you will see a heavier bias towards other people's videos.

JDiculous on June 18, 2019 | prev | next [–]

The "wrong behavior" that YouTube incentivizes is promoting and encouraging clickbait garbage content (just look at the default homepage). The holy metric now is "watch time", the result being that creators stretch out their content to 10 minutes, because then YouTube is more likely to promote it (and a midroll ad = twice the revenue). Yesterday YouTube recommended me a 10-minute video of some guy explaining how he made a simple drone shot, which could've been condensed down to a single sentence: "Turn sensors off". What a waste of time. But hey, they're a corporation and thus have no accountability to the public good.

umvi on June 14, 2019 | prev | next [–]

Does the algorithm incentivize bad behavior or simply reflect the desires of the viewers? Someone watching lots of DIY home repair videos will start seeing more. In that case it seems like it's incentivizing good behavior. Likewise, someone watching lots of soft porn on YouTube will be recommended more soft porn.

wyoh on June 14, 2019 | prev | next [2 more]

alt_f4 on June 15, 2019 | prev | next [–]

Yet another NYT anti-tech hit piece.

annadane on June 14, 2019 [flagged] | prev | next [19 more]

ilikehurdles on June 14, 2019 | prev | next [–]

Google absolutely can do all of those things without an algorithm. What they can't do is accomplish that without impacting profit margins (or, at the minimum, executive bonuses). "If it impacts business as usual, then it is impossible" is a naive/flawed/libertarian stance.

xondono on June 14, 2019 | parent | next [–]

You do realize that to cover current needs (400h uploaded every minute), YouTube would need to employ more than 72,000 people working full time, right?

emmp on June 14, 2019 | root | parent | next [–]

And these people would inevitably make some number of mistakes in categorization too, or miss something, or just be unable to quite hit some baseline universal standard that doesn't upset a group. Then YouTube still gets the bad press.

bjourne on June 14, 2019 | root | parent | prev | next [–]

But 99.9% of all videos uploaded never get more than a few handfuls of views, so those are irrelevant. Of the remaining 0.1%, you don't need to watch every second of every frame - speeding through at twice the speed should be doable. So by your own calculations, 72,000 * 0.001 * 0.5 = 36 people working full time.

xondono on June 15, 2019 | root | parent | next [–]

You can set that 0.001 factor as big or as small as you like, but then we'd get the same NYT hit piece saying this is intentionally being done by humans.

throwaway287391 on June 14, 2019 | parent | prev | next [–]

You made me curious, so I did some back-of-the-envelope math. An average of 576K hours of video is uploaded to YouTube every day [1], which is 4.032M hours per week. If the reviewers watch all the video at 1x speed and work 40 hours per week, you'd need about 100K reviewers to do the job. (This is just to watch the video - not including any additional work done to annotate the video with whatever information you want out of your reviewers.) If each one costs $30K a year (probably a lowball estimate including salary, insurance, etc.), it would cost a total of $3B per year. YouTube makes $4B in revenue per year and roughly zero profit AFAICT, so there's no way this is feasible.

[1] https://www.quora.com/How-many-videos-are-uploaded-on-YouTub...
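(The same estimate as a sketch in code; the upload rate and per-reviewer cost are the figures quoted in the comment above, not audited numbers.)

```python
# throwaway287391's back-of-the-envelope review-staffing estimate.
hours_uploaded_per_day = 576_000          # Quora-sourced figure from the comment
hours_per_week = hours_uploaded_per_day * 7   # 4.032M hours/week
reviewer_hours_per_week = 40              # 1x playback speed, full-time
reviewers = hours_per_week / reviewer_hours_per_week
annual_cost = reviewers * 30_000          # assumed cost per reviewer per year

print(f"{reviewers:,.0f} reviewers, ${annual_cost / 1e9:.1f}B/year")
# -> 100,800 reviewers, $3.0B/year
```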
scarface74 on June 14, 2019 | parent | prev | next [–]

I'm usually a proponent of the "walled garden" when it comes to applications, and of strict sandboxing for most users, since software can harm your computer. But in the case of YouTube, there is absolutely no way that they can curate it while it stays as open as it is.

ilikehurdles on June 14, 2019 | root | parent | next [–]

There is no need to curate every video, only the ones qualified enough to be recommended/showcased to a public that is not explicitly looking for them.

patorjk on June 14, 2019 | root | parent | next [–]

Say I watch a video on a topic like "NES video game speed running". Right now I'd see other NES video game speed running videos; it's very useful. In a curated world, what would be recommended? It's probably too much of a niche topic to yield results that would be very useful.

bsder on June 14, 2019 | root | parent | prev | next [–]

> But in the case of YouTube, there is absolutely no way that they can curate it while it stays as open as it is.

So? If YouTube exits the space and allows oxygen back into the video sharing market, we might actually get some different video sharing services that do different things (a la Nico Nico Douga).

scarface74 on June 14, 2019 | root | parent | next [–]

Video streaming, processing, and storage at scale still cost a lot of money. I don't think even Google is doing it profitably.

physics515 on June 14, 2019 | parent | prev | next [–]

YouTube does human curation already. They are referred to as "playlists", and every user has the ability to create and share them. So what you are asking for is for Google to create their own playlists? Would this also entail removing that ability from other users?

emilfihlman on June 15, 2019 | prev | next [–]

I mean, PewDiePie's info is rather public, but what's with the need to "dox" him right in the beginning?

0815test on June 14, 2019 | prev | next [–]

Quite true, but let's not pretend that Twittr, Tumbler and Fakebook aren't also "incenting" all sorts of distorted behaviors of their own! These sites are "algorithms" all the same, even if the workings of these algorithms are in some ways more transparent. We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.

nkozyra on June 14, 2019 | parent | next [–]

> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.

This always sounds good, but decentralization is nearly impossible to commoditize or make appealing to the general public. Outside of evangelism and word-of-mouth, how are people going to escape the YouTube advertising budget and instead choose - en masse - the product that is better for their privacy? There's just so much money and inertia to fight.
Outside of evangelism and word-of-mouth, how are people going to escape the Youtube advertising budget and instead choose - en masse - the product that is better for their privacy?%0A%0AThere's just so much money and inertia to fight.%0A%0A%0A%09%0Aswiley on June 14, 2019 | root | parent | next [–]%0A%0AYouTube removing harmless content over copyright etc is one way.%0A%0A%09%0Aphreeza on June 14, 2019 | parent | prev | next [–]%0A%0AIf the ranking algorithm is open for all to see, won't that encourage even worse gaming of the system? I am trying to think of comparable situations in existing open systems, but none come to mind.%0A%0A%09%0Athephyber on June 14, 2019 | parent | prev | next [–]%0A%0A> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another%0AWe already have them, yet FB, IG, Twitter, YT are the social media behemoths.%0A%0AAre you making a plea for the average internet person to care about the values of the platforms they use over the platform content? You are likely preaching to the choir here on HN, but I would guess that the audience here is only 1% of 1% of the audience you need to message.%0A%0ACorps make good use of psychological experiments to optimize their utility function. "Evil is efficient." The problem is that companies optimize for money without taking into account any other factor in any significant way.%0A%0A> In 1970, Nobel Prize–winning economist Milton Friedman published an essay in The New York Times Magazine titled “The Social Responsibility of Business Is to Increase Its Profits.” [1]%0A%0AArguably this quote incentivized the destruction of "good corporate citizenship" (although I admit it's possible that concept never existed in a broad sense).%0A%0A[1] https://www.newsweek.com/2017/04/14/harvard-business-school-...%0A%0A### Summary of Discussion%0A%0A#### **Call for Federation**%0A%0A* Advocacy for open, federated platforms like Mastodon, Matrix, and ActivityPub.%0A* Goal: Let users easily switch platforms if they dislike one algorithm.%0A%0A#### **Current Reality**%0A%0A* Despite federation tech existing, centralized platforms (FB, IG, Twitter, YT) dominate.%0A%0A#### **Audience Concern**%0A%0A* Poster questions if the message is aimed at everyday users or just the tech-savvy.%0A* HN community is niche—far from the mass audience needed for change.%0A%0A#### **Corporate Incentives**%0A%0A* Corporations optimize for profit using psychological tactics.%0A* "Evil is efficient" — optimizing without ethical constraints works well.%0A%0A#### **Philosophical Note**%0A%0A* Reference to Milton Friedman%u2019s 1970 essay asserting profit as the sole responsibility of business.%0A* Implication: This mindset may have undermined ethical corporate behavior.%0A%0A%09%0Atqi on June 14, 2019 | parent | prev | next [–]%0A%0AI think the author's issue is not that her recommendations are bad, but that other people are getting recommendations for things she disagrees with (ie conspiracy theory videos, child-unsafe content, etc). So I don't think she would view decentralization as a win.%0A%0A%09%0Ajiveturkey on June 14, 2019 | prev [–]%0A%0AWow, I'm conflicted. First, an obvious idiot statement, which helps us ground our analysis:%0A> Human intuition can recognize motives in people%u2019s viewing decisions, and can step in to discourage that — which most likely would have happened if videos were being recommended by humans, and not a computer. 
But to YouTube%u2019s nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist%u2019s appetite is a job well done.%0A%0ASo this person is advocating that a human (ie, another human besides oneself, an employee at youtube), have access to the click stream of individual users? This proposal, in 2019??? Of course this would have to be compulsory to be effective. Why would I want a megacorp to be making moral decisions for me? I'm ok with them making amoral algorithmic decisions.%0A%0AThe author is generalizing the problem of YT Kids, which should be human curated, to all of youtube.%0A%0AOTOH, yeah feeding our worst impulses is kind of a problem. But, tweaking the algorithm isn't the solution. The machine itself is designed to thrive on attention.%0AsRGB Intent: 0%0AX PixelsHere is the translation of the provided text to English:%0A%0A---%0A%0APer Meter: 3,780%0AY Pixels Per Meter: 3,780%0A%0A---%0A%0AThe rest of the text appears to be HTML code, which doesn't contain meaningful content to translate. If you need assistance with something specific in the HTML or any other part, please let me know! style="float: right;">


rn, networked and increasingly polarised world. It is also important to underline from the outset that, while much of the contemporary furor about mis-information has focused on its political varieties, ‘information pollution%u20196 contaminates public discourse on a range of issues. For example, medical mis-information has always posed a worldwide threat to health, and research has demonstrated how incorrect treatment advice is perpetuated through spoken rumours7 , tweets8 , Google results9 and Pinterest boards10 . Furthermore, in the realm of climate change, a recent study examined the impact of exposure to climate-related conspiracy theories. It found that exposure to such theories created a sense of powerlessness, resulting in disengagement fro” selected.%0A%0APlease proofread the selection for spelling and grammar errors. Identify any mistakes and provide a corrected version of the text. Maintain the meaning and factual accuracy and output the list of proposed corrections first, followed by the final, corrected version of the text. sent back an error. Error code: 431 Request Header Fields Too Large Check to make sure you’ve typed the website address correctly. The algorithm isn't responsible for helping you make good life choices. The algorithm is responsible for recommending videos that you would like, and it seems like it does a good job of that, generally. Unfortunately, some people like bad things and that's an age old problem that is hard to fix. That said, it would be nice if users could CTOA (choose their own algorithm) instead of letting Google be the sole gatekeeper. detaro on June 14, 2019 | parent | next [–] In my experience, some ~20 % of recommendations almost always go towards noticably more clickbaity/low-quality content (with rest often fitting quite well, or at least being of a similar level, just on topics that happen to not interest me right now), and as soon as you make the mistake to click one of them it shoots up dramatically. I've taken to open videos I'm not sure about in a private tab to avoid 2 weeks of crap recommendations. bubblewrap on June 15, 2019 | parent | prev | next [–] I guess you could compare it to criminals using the telephone. Invention of the telephone helped a lot of people, but unfortunately it also helps criminals. Likewise the YouTube algorithm helps many people, but criminals or unwanted people (like pedophiles) can also use it. It's ok to think about ways to prevent it, but I don't think it should be the first concern. Imagine outlawing telephony, because criminals could benefit from its use. DanBC on June 15, 2019 | root | parent | next [–] Telephony is an interesting example because for many many years it had very tight restrictions. Telephone service providers were monopolies, sometimes government monopolies. There was only one type of telephone you could use, and that was supplied by that same monopoly. It was illegal to attach anything else to the line either directly or indirectly. There were even (outside the US) laws on what you could say when talking on the phone. Here's an article from 1994 about a modem maker who had products that were not officially licensed to connect to the network. https://www.newscientist.com/article/mg14219263-000-technolo... posterboy on June 14, 2019 | parent | prev | next [–] It is not the case that someone watching a certain topic will see videos exclusively tailored to their taste. 
detaro on June 14, 2019 | parent | next [–] In my experience, some ~20% of recommendations almost always go towards noticeably more clickbaity/low-quality content (with the rest often fitting quite well, or at least being of a similar level, just on topics that happen to not interest me right now), and as soon as you make the mistake of clicking one of them it shoots up dramatically. I've taken to opening videos I'm not sure about in a private tab to avoid 2 weeks of crap recommendations.

bubblewrap on June 15, 2019 | parent | prev | next [–] I guess you could compare it to criminals using the telephone. The invention of the telephone helped a lot of people, but unfortunately it also helps criminals. Likewise the YouTube algorithm helps many people, but criminals or unwanted people (like pedophiles) can also use it. It's ok to think about ways to prevent that, but I don't think it should be the first concern. Imagine outlawing telephony because criminals could benefit from its use.

DanBC on June 15, 2019 | root | parent | next [–] Telephony is an interesting example because for many, many years it had very tight restrictions. Telephone service providers were monopolies, sometimes government monopolies. There was only one type of telephone you could use, and that was supplied by that same monopoly. It was illegal to attach anything else to the line either directly or indirectly. There were even (outside the US) laws on what you could say when talking on the phone. Here's an article from 1994 about a modem maker whose products were not officially licensed to connect to the network. https://www.newscientist.com/article/mg14219263-000-technolo...

posterboy on June 14, 2019 | parent | prev | next [–] It is not the case that someone watching a certain topic will see videos exclusively tailored to their taste. Moreover, it is rarely the case that someone watches something specifically definable to the exclusion of anything less specific, because that desire will ideally be quickly saturated. And if it isn't, then the recommendations are still rubbish.

jerrac on June 14, 2019 | prev | next [–] Two ideas come to mind. First, make the engine recommend a few videos that it thinks you probably won't watch. That could help break up the echo chamber effect. Second, allow users to blacklist, or whitelist, different kinds of content. If someone is struggling with sexual attraction to minors, let them blacklist content with minors in it. If I don't want to see the latest anti- (or pro-) videos, I should be able to filter them out. I have no interest in Minecraft, so why should I have to keep scrolling past Minecraft videos just because I watch a lot of game-related videos? That said, all the calls for regulation or censorship concern me. I haven't seen the video, but Steven Crowder saying mean things isn't exactly something that should be censored. Any more than all the videos calling President Trump names. What I'm seeing signs of is a society that silences any speech that doesn't fit in a specific, politically correct, box. And that box is being defined by advertising companies who don't want to be associated with topics that their potential customers find uncomfortable. That's not a direction any of us should support...
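jerrac's two suggestions, forced exploration and user-controlled topic filters, are both straightforward to express in code. A minimal sketch with made-up video records and an arbitrarily chosen exploration rate (nothing here reflects YouTube's actual interfaces):

```python
import random

def recommend(ranked_videos, blocked_topics, n=10, explore_frac=0.2, seed=0):
    """Return top-ranked videos minus user-blocked topics, replacing a
    fraction of slots with random picks from further down the ranking
    to break up the echo chamber."""
    rng = random.Random(seed)
    allowed = [v for v in ranked_videos
               if not set(v["topics"]) & blocked_topics]
    n_explore = int(n * explore_frac)
    top = allowed[: n - n_explore]
    rest = allowed[n - n_explore:]
    explore = rng.sample(rest, min(n_explore, len(rest)))
    return top + explore

# 30 hypothetical videos, every third one tagged "minecraft".
videos = [{"id": f"v{i}", "topics": ["minecraft"] if i % 3 == 0 else ["diy"]}
          for i in range(30)]
picks = recommend(videos, blocked_topics={"minecraft"})
print([v["id"] for v in picks])  # 8 top "diy" picks + 2 exploration picks
```

The open question jerrac leaves aside is incentives: exploration slots lower short-term watch time, which is exactly the metric JDiculous says the platform optimizes.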
bubblewrap on June 15, 2019 | prev | next [–] There seem to be some extreme cases where the algorithm fails. That doesn't imply that in general it doesn't work well. Sounds again like hyperbole from the NYT. I find it more interesting to consider what would actually be a good outcome for the viewers. I suppose originally all those recommender algorithms simply optimized for viewer engagement. Obviously that may not be the best outcome for consumers. Perhaps enraging content makes people stick on a platform longer, for example. But it would be "better" for a viewer to see more educational content and even to disconnect after a certain while. But how would you even quantify that, for the algorithm to be able to train for it? The son of a friend of mine taught himself programming from YouTube videos, which YouTube had recommended to him. I wouldn't complain about a result like that.
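One hypothetical answer to bubblewrap's "how would you even quantify that" is a multi-objective ranking score that trades predicted engagement against explicit quality proxies. Everything below is an assumption for illustration: the signal names, the weights, and the premise that "educational value" or "clickbaitiness" can be predicted reliably at all:

```python
def ranking_score(video, w_watch=1.0, w_edu=0.5, w_clickbait=0.8):
    """Hypothetical score: reward predicted watch probability, nudge up
    educational content, penalize predicted clickbait. All three inputs
    are assumed to be model outputs in [0, 1]."""
    return (w_watch * video["p_watch"]
            + w_edu * video["p_educational"]
            - w_clickbait * video["p_clickbait"])

candidates = [
    {"id": "drone_tip_10min",    "p_watch": 0.9, "p_educational": 0.2, "p_clickbait": 0.9},
    {"id": "programming_course", "p_watch": 0.6, "p_educational": 0.9, "p_clickbait": 0.1},
]
ranked = sorted(candidates, key=ranking_score, reverse=True)
print([v["id"] for v in ranked])  # ['programming_course', 'drone_tip_10min']
```

The hard part is not the formula but the labels: any proxy for "educational" that a model can predict, creators can also game, which is the same dynamic JDiculous describes with the 10-minute mark.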
cyrksoft on June 15, 2019 | prev | next [–] Big Media is dying and they are desperate to shut down competition. This is a political article and nothing more.

kodz4 on June 14, 2019 | prev | next [–] It's not complicated. They need to stop showing people the upvote and view COUNTS. Behind the scenes they can still use them to make recommendations. Those numbers are pseudo-signals of quality to people who encounter content they have never encountered before. Even when they have doubts that they are watching something unhealthy, the mind goes "well, if the rest of the world thinks this dumbass is important, I better pay attention..." If a dumbass hurting people on video gets 10 million views, other dumbasses worldwide automatically get triggered looking at the count. "Hey, I can do this, maybe I should run for President..." Remove the counts and you remove the pseudo-signal of quality.

verall on June 14, 2019 | parent | next [–] It is complicated. I think that's a bad solution. I want to see the counts. I feel it is far more transparent to see the counts than for things to just be surfaced or not, opaquely. Youtube is not a discussion site and it does not work as one. How popular things are is a part of the context of pop culture, and most youtube content is pop culture.

cheez on June 14, 2019 | root | parent | next [–] Every single day, I watch the channel of a guy who has put out < 15 minute videos going back to nearly the founding of YouTube. He gets an average of 10-15 views per day. The value this guy adds to my day is literally measurable in $$$. If I could find more people like him, that would be great, but instead these are my recommendations: - 5 ways to do X - Bill Gates breaks down blah blah blah - Something about Tesla - One video by a guy I discovered outside of YouTube who is similar to the guy I watch every day. I don't watch this one that much though. YouTube's algorithm is not designed for discovery. It's designed for engagement. So I keep separate accounts: 1. An account for actually useful stuff, where YT's recommendations are useless. 2. An account where YT's recommendations are OK: white-noise-like things, Howard Stern interviews, etc. I wish you could configure the algorithm for discovery somehow.

onemoresoop on June 14, 2019 | root | parent | next [–] Absolutely. There are gems on YouTube that are not only hard but almost impossible to find due to the flood of crap they repeatedly recommend me. As far as I am concerned the algorithm is broken and almost killed my YouTube experience (I have to admit that I'm still on YouTube, but a lot less these days). I figure that they probably don't give a damn about users like me; the algorithm is designed to steer traffic to a pyramid of monetized content, and I don't seem to have any options to fight the trend but to disengage. There are some channels/users that I started following a long time ago, but after I watch one of their videos I land back on the crapflood.

verall on June 14, 2019 | root | parent | prev | next [–] I completely agree, and for a good example of "better", I think Spotify's discovery algorithms are "pretty alright". It's less likely to get stuck in a rut. Youtube is happy to try to bring you down a rabbit hole. And content creators play a part in this: next time you hear about some pop-drama, do a youtube search and admire how many videos are a single person just reblabbing the same story in front of a mic, cam, or videogames. You'll find hundreds. And so many things on youtube are like this...

makomk on June 14, 2019 | root | parent | prev | next [–] I'm pretty sure that YouTube used to be better at recommending obscure long-tail videos but cracked down on it a while ago precisely because of articles like this one - now only videos from relatively big channels which have undergone a certain amount of minimal manual scrutiny get recommended.

Nasrudith on June 14, 2019 | root | parent | prev | next [–] Of course it is a matter of metrics - it has no way of knowing what is useful. The closest way to algorithmically discover that (outcomes over time) would be prone to spurious correlations and be so intrusive it would make Cambridge Analytica look like Lavabit.

cheez on June 15, 2019 | root | parent | next [–] I'm thinking "make things more discoverable" rather than "find more useful things", if that makes sense. I'm willing to wade through it myself if you present me with options.

asdff on June 14, 2019 | root | parent | prev | next [–] What about searching for keywords? That's how youtube discovery worked before recommendations came about, and it worked fine (still does).

cheez on June 15, 2019 | root | parent | next [–] Yes, I do that occasionally when trying to solve a specific problem. Often helps.

naravara on June 14, 2019 | root | parent | prev | next [–] > How popular things are is a part of the context of pop culture, and most youtube content is pop culture. Only with respect to people you know talking about it. Not just arbitrary metrics. Rating systems are part of the context of putting valuations on ads, not part of culture. Whatever impact they do have is based on advertisers trying to reel you in by applying the bandwagon fallacy and stoking your sense of FOMO. It's not something edifying.

smt88 on June 14, 2019 | root | parent | prev | next [–] > How popular things are is a part of the context of pop culture, and most youtube content is pop culture. I can't think of any traditional medium that tells you the popularity of something before you consume it. Movie theaters, TV stations, radio stations, etc. have no concept of "view counts" telling you whether or not to consume something.

rexpop on June 14, 2019 | root | parent | next [–] > I can't think of any Well, the information IS available beforehand, in Nielsen ratings and films' grossing numbers, but you're essentially right. That's the problem: opaqueness leaves us vulnerable to being misled. Some PR company calls it "the hottest ticket of the season," and we have no way of corroborating this claim.

verall on June 14, 2019 | root | parent | prev | next [–] Uh, they don't have view counts, but they certainly tell you when things are popular. These are bad examples because all of these have very public "view counter"-alikes. First-weekend box office for "popular" movies is reported in news media. TV stations have ratings. Pop music has Billboard. In fact we have a local "Top 50" station which only plays "popular" music. View counts ≈ box office take ≈ TV ratings ≈ Billboard. Every type of media you list has gatekeepers, kingmakers, counters, and other things influencing you to consume or not.

smt88 on June 15, 2019 | root | parent | next [–] I have never met anyone who chooses their movies based on box office numbers, nor have I met anyone who chooses TV shows based on their ratings. Those are all after-the-fact consumption stats, unlike YouTube view counts, which are shown to you upfront (without you looking for them).

jakear on June 14, 2019 | parent | prev | next [–] Fully agree. Instagram is removing like counts (or at least looking into it). I think this is a great path forward for the industry. Too often people see "popular" as "correct and not needing question." Edit: the Instagram motivation is admittedly a bit different, but a good path regardless.

notenoughbeans on June 14, 2019 | parent | prev | next [–] That only works for a specific use case. I've been looking at videos on how to drywall. Views and upvotes helped me find the most useful instructionals and skip the bad ones.
lstamour on June 14, 2019 | root | parent | next [–] I often find the upvote-to-downvote ratio to be a stronger sign of quality than purely the number of upvotes. If they showed me the ratio, I might still get the same value from it.

notenoughbeans on June 14, 2019 | root | parent | next [–] The ratio is important too, but the vote count matters as well. I interpret 4 upvotes and 1 downvote much differently than 4000 upvotes and 1000 downvotes.

lstamour on June 14, 2019 | root | parent | next [–] While I agree there isn't much signal in 1:1 and 4:1, it's been my experience that if a video gets a downvote that quickly, it probably isn't as good as a video only attracting upvotes, for educational, how-to, or technical content.
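lstamour and notenoughbeans are circling a standard statistical fix: rank by a confidence-adjusted ratio rather than the raw ratio or the raw count. The lower bound of the Wilson score interval does exactly this, and it separates the two cases in the comment above even though they share the same 4:1 ratio. A sketch using the standard formula:

```python
from math import sqrt

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the 95% Wilson confidence interval for the true
    upvote fraction. Small samples get pulled sharply toward zero."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

print(wilson_lower_bound(4, 1))        # ~0.38: same 4:1 ratio, little confidence
print(wilson_lower_bound(4000, 1000))  # ~0.78: same ratio, high confidence
```

Reddit used this ordering for its "best" comment sort; surfacing the bound instead of raw counts would also address kodz4's pseudo-signal objection without hiding information entirely.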
hellbanner on June 14, 2019 | parent | prev | next [–] https://www.youtube.com/watch?v=jdaPJLJCK1M&t=6s Remember that anyone can manipulate these algorithms.

merpnderp on June 14, 2019 | parent | prev | next [–] People aren't children needing information withheld from them. Give them the information and let them make up their own minds. This kind of coddling is how we ended up here in the first place.

rossdavidh on June 14, 2019 | root | parent | next [–] Except, you know, for the ones who are children, who are on YouTube a lot.

pier25 on June 14, 2019 | root | parent | prev | next [–] > Give them the information and let them make up their own minds. That only works for rational people.

Nasrudith on June 14, 2019 | root | parent | next [–] And if you forever treat people as irrational actors they will never grow to be rational ones.

pier25 on June 14, 2019 | root | parent | next [–] That's not really the problem, though; it's that irrational people have power, like voting or spreading fake information. Also, I'd say people turn rational or irrational by their own choices.

bubblewrap on June 15, 2019 | parent | prev | next [–] I would expect that most of the time the counts actually are a pretty good indicator. There may be good content that is overlooked, but if something is successful, it probably has something going for it.

CountHackulus on June 14, 2019 | prev | next [–] To quote Jim Sterling, YouTube has a YouTube problem.

Buge on June 15, 2019 | prev | next [–] > PewDiePie, a skinny, fast-talking Swede Is he really fast-talking? He seems kind of slow-talking to me; when I watch his videos I use 2x speed.

dredmorbius on June 15, 2019 | parent | next [–] "Fast talker" is not merely descriptive but idiomatic: https://idioms.thefreedictionary.com/fast+talker https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFalla... Cf. fast and loose: https://idioms.thefreedictionary.com/play+fast+and+loose+wit...

not_that_noob on June 15, 2019 | prev | next [–] YouTube also leads you down radical rabbit holes, as that keeps the algorithm happy. How many of the recent terror attacks (ISIS or New Zealand-type incidents) were fostered by YouTube watching?

bubblewrap on June 15, 2019 | parent | next [–] My guess would be zero.

wyoh on June 14, 2019 | prev | next [2 more]

alt_f4 on June 15, 2019 | prev | next [–] yet another NYT anti-tech hit piece

annadane on June 14, 2019 [flagged] | prev | next [19 more]

ilikehurdles on June 14, 2019 | prev | next [–] Google absolutely can do all of those things without an algorithm. What they can't do is accomplish that without impacting profit margins (or at the minimum, executive bonuses). "If it impacts business as usual, then it is impossible" is a naive/flawed/libertarian stance.
xondono on June 14, 2019 | parent | next [–] You do realize that to cover current needs (400h uploaded every minute), YouTube would need to employ more than 72,000 people working full time, right?

emmp on June 14, 2019 | root | parent | next [–] And these people would inevitably make some number of mistakes in categorization too, or miss something, or just be unable to quite hit some baseline universal standard that doesn't upset a group. Then YouTube still gets the bad press.

bjourne on June 14, 2019 | root | parent | prev | next [–] But 99.9% of all videos uploaded never get more than a few handfuls of views, so those are irrelevant. Of the remaining 0.1%, you don't need to watch every second of every frame - speeding through at twice the speed should be doable. So by your own calculations, 72,000 * 0.001 * 0.5 = 36 people working full time.

xondono on June 15, 2019 | root | parent | next [–] You can set that 0.001 factor as big or as low as you like, but then we'd get the same NYT hit piece saying this is intentionally being done by humans.

throwaway287391 on June 14, 2019 | parent | prev | next [–] You made me curious, so I did some back-of-the-envelope math. An average of 576K hours of video is uploaded to YouTube every day [1], which is 4.032M hours per week. If the reviewers watch all the video at 1x speed and work 40 hours per week, you'd need about 100K reviewers to do the job. (This is just to watch the video -- not including any additional work done to annotate the video with whatever information you want out of your reviewers.) If each one costs $30K a year (probably a lowball estimate including salary, insurance, etc.), it would cost a total of $3B per year. YouTube makes $4B in revenue per year and roughly zero profit AFAICT, so there's no way this is feasible. [1] https://www.quora.com/How-many-videos-are-uploaded-on-YouTub...
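Both back-of-the-envelope estimates are easy to reproduce. The inputs (576K hours/day, 40-hour weeks, $30K per reviewer, and bjourne's 0.1%/2x-speed assumptions applied to xondono's 72,000 figure) are the commenters' own, not audited numbers:

```python
# throwaway287391's estimate: review everything at 1x speed.
hours_per_day = 576_000                       # claimed upload volume [1]
weekly_hours = hours_per_day * 7              # 4,032,000 hours/week
reviewers = weekly_hours / 40                 # one 40-hour week each
annual_cost = reviewers * 30_000              # $30K per reviewer per year
print(f"{reviewers:,.0f} reviewers, ${annual_cost / 1e9:.1f}B/year")
# 100,800 reviewers, $3.0B/year

# bjourne's counter-estimate: only the 0.1% of videos that get real
# views, watched at 2x speed, starting from xondono's headcount.
print(72_000 * 0.001 * 0.5)                   # 36.0
```

Note that the two estimates aren't directly comparable: bjourne scales a headcount that was itself derived from total upload volume, so the 0.1% filter assumes you can identify the relevant videos before anyone reviews them.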
scarface74 on June 14, 2019 | parent | prev | next [–] I'm usually a proponent of the "walled garden" when it comes to applications, and of strict sandboxing for most users, since software can harm your computer. But in the case of YouTube, there is absolutely no way that they can curate it and still keep it as open as it is.

ilikehurdles on June 14, 2019 | root | parent | next [–] There is no need to curate every video, only the ones qualified enough to be recommended/showcased to a public that is not explicitly looking for them.

patorjk on June 14, 2019 | root | parent | next [–] Say I watch a video on a topic like "NES video game speed running". Right now I'd see other NES speed-running videos; it's very useful. In a curated world, what would be recommended? It's probably too much of a niche topic to yield results that would be very useful.

bsder on June 14, 2019 | root | parent | prev | next [–] > But in the case of YouTube, there is absolutely no way that they can curate it and still keep it as open as it is. So? If YouTube exits the space and allows oxygen back into the video-sharing market, we might actually get some different video-sharing services that do different things (a la NicoNicoDouga).

scarface74 on June 14, 2019 | root | parent | next [–] Video streaming, processing, and storage at scale still cost a lot of money. I don't think even Google is doing it profitably.

physics515 on June 14, 2019 | parent | prev | next [–] YouTube does human curation already. They are referred to as "playlists", and every user has the ability to create and share them. So what you are asking for is for Google to create their own playlists? Would this also entail removing that ability from other users?

emilfihlman on June 15, 2019 | prev | next [–] I mean, PewDiePie's info is rather public, but what's with the need to "dox" him right at the beginning?

0815test on June 14, 2019 | prev | next [–] Quite true, but let's not pretend that Twittr, Tumbler and Fakebook aren't also "incenting" all sorts of distorted behaviors of their own! These sites are "algorithms" all the same, even if the workings of these algorithms are in some ways more transparent. We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.

nkozyra on June 14, 2019 | parent | next [–] > We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case. This always sounds good, but decentralized is nearly impossible to commoditize or make appealing to the general public. Outside of evangelism and word-of-mouth, how are people going to escape the Youtube advertising budget and instead choose - en masse - the product that is better for their privacy? There's just so much money and inertia to fight.

swiley on June 14, 2019 | root | parent | next [–] YouTube removing harmless content over copyright etc. is one way.

phreeza on June 14, 2019 | parent | prev | next [–] If the ranking algorithm is open for all to see, won't that encourage even worse gaming of the system? I am trying to think of comparable situations in existing open systems, but none come to mind.
thephyber on June 14, 2019 | parent | prev | next [–] > We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another We already have them, yet FB, IG, Twitter and YT are the social media behemoths. Are you making a plea for the average internet person to care about the values of the platforms they use over the platform content? You are likely preaching to the choir here on HN, but I would guess that the audience here is only 1% of 1% of the audience you need to message. Corps make good use of psychological experiments to optimize their utility function. "Evil is efficient." The problem is that companies optimize for money without taking any other factor into account in any significant way. > In 1970, Nobel Prize–winning economist Milton Friedman published an essay in The New York Times Magazine titled "The Social Responsibility of Business Is to Increase Its Profits." [1] Arguably this essay incentivized the destruction of "good corporate citizenship" (although I admit it's possible that concept never existed in a broad sense). [1] https://www.newsweek.com/2017/04/14/harvard-business-school-...

### Summary of Discussion

#### **Call for Federation**

* Advocacy for open, federated platforms like Mastodon, Matrix, and ActivityPub.
* Goal: let users easily switch platforms if they dislike one algorithm.

#### **Current Reality**

* Despite federation tech existing, centralized platforms (FB, IG, Twitter, YT) dominate.

#### **Audience Concern**

* The poster questions whether the message is aimed at everyday users or just the tech-savvy.
* The HN community is niche: far from the mass audience needed for change.

#### **Corporate Incentives**

* Corporations optimize for profit using psychological tactics.
* "Evil is efficient": optimizing without ethical constraints works well.

#### **Philosophical Note**

* Reference to Milton Friedman's 1970 essay asserting profit as the sole responsibility of business.
* Implication: this mindset may have undermined ethical corporate behavior.

tqi on June 14, 2019 | parent | prev | next [–] I think the author's issue is not that her recommendations are bad, but that other people are getting recommendations for things she disagrees with (i.e., conspiracy theory videos, child-unsafe content, etc.). So I don't think she would view decentralization as a win.

jiveturkey on June 14, 2019 | prev [–] Wow, I'm conflicted. First, an obvious idiot statement, which helps us ground our analysis: > Human intuition can recognize motives in people's viewing decisions, and can step in to discourage that — which most likely would have happened if videos were being recommended by humans, and not a computer. But to YouTube's nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist's appetite is a job well done. So this person is advocating that a human (i.e., another human besides oneself, an employee at YouTube) have access to the click stream of individual users? This proposal, in 2019??? Of course this would have to be compulsory to be effective. Why would I want a megacorp making moral decisions for me? I'm ok with them making amoral algorithmic decisions. The author is generalizing the problem of YT Kids, which should be human-curated, to all of YouTube. OTOH, yeah, feeding our worst impulses is kind of a problem. But tweaking the algorithm isn't the solution. The machine itself is designed to thrive on attention.


"Rumours, conspiracy theories and fabricated information are far from new.4 Politicians have forever made unrealistic promises during election campaigns. Corporations have always nudged people away from thinking about issues in particular ways. And the media has long disseminated misleading stories for their shock value. However, the complexity and scale of information pollution in our digitally-connected world presents an unprecedented challenge. While it is easy to dismiss the sudden focus on this issue because of the long and varied history of mis- and dis-information5, we argue that there is an immediate need to seek workable solutions for the polluted information streams that are now characteristic of our modern, networked and increasingly polarised world. It is also important to underline from the outset that, while much of the contemporary furor about mis-information has focused on its political varieties, 'information pollution'6 contaminates public discourse on a range of issues. For example, medical mis-information has always posed a worldwide threat to health, and research has demonstrated how incorrect treatment advice is perpetuated through spoken rumours7, tweets8, Google results9 and Pinterest boards10. Furthermore, in the realm of climate change, a recent study examined the impact of exposure to climate-related conspiracy theories. It found that exposure to such theories created a sense of powerlessness, resulting in disengagement from politics and a reduced likelihood of people to make small changes that would reduce their carbon footprint.11 In this report, we hope to provide a framework for policy-makers, legislators, researchers, technologists and practitioners working on challenges related to mis-, dis- and malinformation—which together we call information disorder."

WoI (the weaponization of information) and challenges to FoE (freedom of expression):

- Evaporation of truth
- Abuse of freedom-of-expression standards and protection mechanisms
- De-legitimisation of, and increasing mistrust in, the media
- An 'all or nothing' approach by the international FoE organizations may lead to 'nothing' rather than 'all'

Restricting the weaponization of information as a form of expression is often justified on the grounds of national security or territorial integrity (not least because of its "external" origin targeting sovereignty). The challenge: any discourse that can be described as weaponization of information is usually also represented as an exercise of freedom of expression.
