
Core Explanation Section Template

Paragraph 1: Identify the Medical Evidence

I am writing to address a significant misinterpretation of medical evidence in my case. Specifically, the medical report dated [DATE] from [DOCTOR'S NAME], [TITLE/SPECIALTY], [MEDICAL FACILITY] was incorrectly assessed in your decision dated [DECISION DATE]. This report, reference number [REFERENCE NUMBER if applicable], contains critical medical findings that were not accurately assessed in the decision-making process.

Paragraph 2: Explain What the Evidence Actually Demonstrates

The medical evidence clearly establishes that I have [SPECIFIC DIAGNOSIS, e.g., "chronic lumbar radiculopathy with L5-S1 nerve root compression"]. Dr. [NAME]'s assessment specifically states: "[DIRECT QUOTE FROM MEDICAL REPORT]." The medical documentation further indicates [SPECIFIC CLINICAL FINDINGS, e.g., "MRI findings show disc herniation at L5-S1 with impingement"], which directly supports [SPECIFIC ACCOMMODATION/BENEFIT/STATUS] that I am seeking. Due to [CONDITION], I am unable to [SPECIFIC FUNCTIONAL LIMITATIONS, e.g., "stand for more than 10 minutes, lift objects over 5 kilograms, or walk distances exceeding 200 meters without significant pain"].

Paragraph 3: Point Out the Discrepancy Professionally

Contrary to the agency's conclusion that [SPECIFIC AGENCY CONCLUSION, e.g., "I have no significant limitations"], Dr. [NAME]'s assessment indicates [CONTRASTING MEDICAL REALITY, e.g., "significant functional impairments that hinder basic occupational tasks"]. The decision did not consider [SPECIFIC OVERLOOKED EVIDENCE, e.g., "the MRI findings dated [DATE], which revealed disc herniation with nerve compression"]. This represents a fundamental misinterpretation of the medical evidence, as [KEY MEDICAL INFORMATION] was not accurately assessed during the review process.

Paragraph 4: Connect to Negative Decision Impact

This misinterpretation directly led to [SPECIFIC NEGATIVE OUTCOME, e.g., "the denial of essential physiotherapy sessions, exacerbating my condition and prolonging recovery"]. Had the medical evidence been correctly interpreted to show [CORRECT INTERPRETATION], the decision would have supported [DESIRED OUTCOME]. I request a reassessment of my case, considering the comprehensive medical evidence provided, to [SPECIFIC REQUEST, e.g., "approve the necessary treatment plan and disability accommodation"]. I look forward to your prompt response and am available for any further information required.


Customization Notes:

Replace bracketed placeholders with:

  • Exact medical diagnoses and terminology
  • Direct quotations from medical reports
  • Specific functional limitations (e.g., restrictions on standing, lifting or walking)
  • Precise dates, reference numbers, and doctor credentials
  • Clear contrast between agency conclusion and medical evidence
  • Specific overlooked evidence or test results

Enhanced Evidence Strategy:

  • Include exact quotes: "Dr. Smith noted: 'Patient exhibits persistent neuropathic pain unresponsive to conservative treatment'"
  • Specify functional impacts: "Unable to stand for more than 10 minutes or lift objects over 5 kilograms"
  • Reference specific tests: "MRI dated [DATE] revealed [SPECIFIC FINDINGS]"
  • State clear contrasts: "Contrary to agency's 'no limitations' finding, evidence shows significant impairments"

Maintain throughout:

  • Direct, unambiguous language
  • Professional, respectful tone
  • Evidence-based assertions
  • Clear action requests
  • Follow-up intentions



YouTube’s Algorithm Incentivizes the Wrong Behavior (nytimes.com)

203 points by furcyd on June 14, 2019 | 251 comments



"If YouTube won’t remove the algorithm, it must, at the very least, make significant changes, and have greater human involvement in the recommendation process.", man does this person know how many videos and how many users YouTube has? They cannot use anything except an algorithm to recommend videos. They cannot use anything except an algorithm to detect videos inappropriate for children. It seems YouTube is working on this, and this opinion seems like a ill thought out fluff piece to enrage readers and sell this persons book.


> They cannot use anything except an algorithm to recommend videos.

I agree that with the current business model it is not possible for YouTube to sort it manually.

When I was a kid, a long long time ago, it would have been impossible to conceive of a TV channel showing that kind of content regularly and staying on the air. If their answer had been that they cannot fix it because it costs money, there would have been an outraged response.

If YouTube cannot keep things legal, cannot respect people's rights, cannot be a good, responsible part of society because it is not cost-effective, for me the way to go is clear. And that is true for YouTube, Facebook or any other business, digital or not.


Youtube is not a TV channel, it's a crowdsourced video-sharing site.

If we want to have a "free" (as in no subscription and no money required to be paid for the service) video sharing/uploading site, what model would make that work and still have human reviewing? I consider the fact that there may be undesirable videos the cost of having such a site, similar to how the "cost" of having a free Internet is that there's going to be lots of hate online and free access to tutorials on how to make bombs and whatnot. It's part of the deal and I'm happy with that, YMMV. If you worry about what kids might access then don't let them access Youtube, but please don't create laws that would make free video sharing sites illegal/impossible to run.

This is true for pretty much any free Internet service that allows for user content. If all of Internet content production will go back to just "official" creators (because they are the only ones where the cost/benefit math would make sense) I think that would be a huge loss/regression over what we have gained since the age of the Internet.


When I was a kid in the 80s, cartoons were basically 30 minute toy commercials. My toddler loves watching videos on YouTube of Russian kids playing with toys, so I guess things haven’t changed much.


How about actually demonstrating harm to children (or to anyone else) before launching a moral panic?

Is that an option?


I'd say having a 13-year-old far-right YouTube star post a video threatening to kill the CEO might be harmful, but maybe that's ok?

https://www.newstatesman.com/science-tech/social-media/2019/...


Do you seriously think that kid was radicalized on YouTube? Where were the parents?


2018: "I'll pick a topic and just give my opinion about it, try to be entertaining, try to be funny, try to be unique and say something other people haven't said before," the youtuber said.

https://redwoodbark.org/46876/culture/redwood-students-view-...

2019:

In response, the principal of the high school sent a note to students and parents Thursday night regarding the "hate-based video and text posts attributed to one of our students":

https://www.kron4.com/news/bay-area/bay-area-girl-says-she-l...


I would think having humans more involved in training the algorithm could scale much better.

Also, detecting videos that are inappropriate for children is a lot harder than determining which content creators can be trusted to post appropriate videos (and to tag them correctly). That can be learned from the user's history, how many times their stuff has been flagged, getting upvotes from users that are themselves deemed credible, and so on. The more layers of indirection, the better, a la PageRank.

So even without analyzing the video itself, it would have a much smaller set of videos it can recommend from, but still potentially millions of videos. You still need some level of staff to train the algorithm, but you don't have to have paid staff look at every single video to have a good set of videos it can recommend. The staff might spend most of their time looking at anomalous videos, such as one posted by a creator the algorithm trusted but then flagged by a user the algorithm considered credible. Then they would tag that video with rich information that will help the algorithm in the future, beyond just removing that video or reducing the trust of the poster or the credibility of the flagger.
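
To make the layers-of-indirection idea concrete, here is a minimal sketch of such a credibility-weighted trust loop. Everything in it (names, numbers, the exact update rule) is a hypothetical illustration of the approach, not anything YouTube is known to run:

    # Toy sketch: creator trust is learned from credibility-weighted upvotes
    # and flags; user credibility is learned from whether a user's flags land
    # on creators the consensus also distrusts. All data below is made up.
    def update_scores(creator_trust, user_credibility, upvotes, flags, rounds=10):
        """Iteratively refine trust and credibility, PageRank-style.
        upvotes/flags: lists of (user, creator) events."""
        for _ in range(rounds):
            new_trust = {}
            for c in creator_trust:
                up = sum(user_credibility[u] for u, cc in upvotes if cc == c)
                down = sum(user_credibility[u] for u, cc in flags if cc == c)
                new_trust[c] = (0.5 + up) / (1.0 + up + down)  # smoothed vote ratio
            new_cred = {}
            for u in user_credibility:
                # A flag looks credible when it lands on a low-trust creator.
                hits = [1.0 - new_trust[c] for uu, c in flags if uu == u]
                new_cred[u] = sum(hits) / len(hits) if hits else 0.5
            creator_trust, user_credibility = new_trust, new_cred
        return creator_trust, user_credibility

    trust, _ = update_scores(
        creator_trust={"kids_channel": 0.5, "spam_channel": 0.5},
        user_credibility={"alice": 0.5, "bob": 0.5},
        upvotes=[("alice", "kids_channel")],
        flags=[("alice", "spam_channel"), ("bob", "spam_channel")])
    recommend_pool = [c for c, t in trust.items() if t > 0.5]
    print(trust, recommend_pool)  # spam_channel drops out of the pool

The recommender would then draw only from creators above the threshold, so staff time goes to the anomalies rather than to every upload.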


The trouble with depending on user flags is that it creates opportunities for blackmail.

https://www.theverge.com/2019/2/11/18220032/youtube-copystri...


The algorithm works really damn well for 99.999% of the cases. It manages to show me great recommendations from very niche things I'm interested in. But it's the very same behavior that can, in some cases, lead to issues.


To me it always pulls me towards television or Hollywood garbage. And videos I have already watched, hundreds of them.


You should check if personalized recommendations are disabled. Google has a history of disabling/enabling settings without telling me.


Are you sure that it's not you who knows very well how to curate your own content and who to subscribe to, rather than the recommendation system?

I'm not sure heavy automation is needed here; people jump from content creator to content creator by word of mouth. In contrast, most algorithmic suggestions to me seem highly biased towards what is popular in general. I click on one wrong video in a news article and for the next two days my recommendations are pop music, Jimmy Kimmel, Ben Shapiro and animal videos.


Not for me, for example I've been watching a few PyCon and I/O talks, and it's been showing me other interesting PyCon talks that are highly ranked. It's also giving me good AutoChess and OneHourOneLife Let'sPlays, both of which I've been very interested in lately.

All three things I just mentioned are fairly niche, comparatively, yet it knows that I've been watching a lot of them lately and is giving me more of it.


I'm reminded of how Google Images had an issue where dark-skinned people sometimes turned up in a search for "gorilla". 99.9% of the time, the image recognition algorithm did really well, but here was a case where an error was really offensive. What was (probably) needed was for a human to come in and, rather than tag every gorilla image, simply give it some extra training around dark-skinned humans and gorillas, or otherwise tweak some things specific to that sort of case, so the chance of it happening was reduced to nearly nothing.

There are probably a ton of situations like that in YouTube, where certain kinds of mistakes are hardly noticed (it shows you a video you weren't remotely interested in), but others can be really bad and need special training to avoid (such as where it shows violent or sexual content to someone who likes nursery rhymes and Peppa Pig).



Maybe they can't make editorial recommendations for the long tail, but they absolutely could do so for the top few thousand videos each week.

Would that yield an improvement? I don't know, but it would have an impact.


I'm kind of wondering if a "Ned Flanders" user-detector is possible.

Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.

Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders Users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag.

Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.

The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.
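
For what it's worth, the core statistical test could be tiny. A rough sketch, assuming hypothetical logs of the fraction of a video each trusted "Flanders" viewer watched, with made-up thresholds:

    # If Flanders users mostly finish the video, it's probably safe; if they
    # bail at roughly the same moment, that's the red flag described above.
    from statistics import mean, stdev

    def flanders_rating(stop_fractions, finish=0.9, cluster_spread=0.05):
        """stop_fractions: fraction watched per viewer (1.0 = to the end)."""
        if mean(stop_fractions) >= finish:
            return "probably safe"      # watched the whole way through
        if len(stop_fractions) > 1 and stdev(stop_fractions) <= cluster_spread:
            return "red flag"           # stops cluster at the same moment
        return "unclear"                # scattered stops: needs manual review

    print(flanders_rating([0.98, 1.0, 0.95, 1.0]))    # probably safe
    print(flanders_rating([0.31, 0.33, 0.30, 0.32]))  # red flag
    print(flanders_rating([0.2, 0.9, 0.5, 0.75]))     # unclear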


Should we filter all the pro-choice videos or the pro-life videos?

Should we filter all the Santa-is-fake videos or the Santa-is-real videos?

Do you agree with Flanders?


Maybe Youtube and their revenue sources agree with him.



The question I have is: how can they tell "Flanders" viewers from "bored" ones or "out of time" ones, short of them flagging it, without a lot of manual review and guesswork?

Reviewing viewers on that level sounds even more intensive than filtering every channel and video.


In the system I've proposed, if enough test-Flanders users are thrown at the content, the stop times should be different enough to trigger an unclear Flanders rating. This would indicate some other metric should be used.

I don't see this test working in isolation. Given its nature, its value is in obscure rejection statements rather than acceptance (or "okilly-dokillies" in this case).

To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.


The problem is that just a few examples of the algorithm getting it wrong is enough to cause an adpocalypse. If millions of videos are uploaded every month then you can imagine how low the error rate has to be.


If Google takes the impractical route and hires a sufficient number of multilingual Ned Flanders, then they're still probably going to have a non-zero false positive rate (humans make mistakes too).

Whatever they do is going to have to be evaluated in terms of best effort / sincerity.

Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks.


The total number of videos really doesn't matter, it is the total number of creators, which at least this site claims is a total of 50m for all time: https://mediakix.com/blog/youtuber-statistics-content-creato... (first result I found)

Just start banning certain creators from showing up in recommendations if their content crosses the line. Not that hard if you are willing to do it.


But how would that solve the problem that the article opened with? There is nothing wrong with the videos of children playing; the wrong part was recommending them to pedophiles.


Feels like the article was about more than that one issue. It also discussed creators splicing in frames of Mickey Mouse and other methods of gaming the algorithm. Most of the responses here seem to be buying into Google's hype around number of hours or videos uploaded per second. I think that is a distraction that lets them off the hook for not managing the community they built.

Every algorithm is an editorial decision.


No, the wrong part was when the pedophiles made inappropriate comments on the videos.


If that's the problem, then gibrown's solution

>Just start banning certain creators from showing up in recommendations if their content crosses the line.

also won't help, because it's not the creators that have content crossing the line, it's the commenters.


> They cannot use anything except an algorithm to recommend videos

That’s assuming recommendations need to be personalized. They could recommend at a higher level to groups of people using attributes like age range or region.

I'm not a fan of their personalized recommendations. Its algorithm overfits my views to recommend videos extremely similar to videos I've recently watched, which isn't really aligned with my interests.

If they took a completely different approach (not personalized) it could really impact the UX in a positive way.
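
As a sketch of how coarse that could be (buckets, data and counts here are all hypothetical):

    # Non-personalized recommendations: rank videos per (age range, region)
    # bucket by raw view counts, and serve the same list to everyone in the
    # bucket. All data below is made up for illustration.
    from collections import Counter, defaultdict

    views = [  # (age_range, region, video_id)
        ("13-17", "US", "v1"), ("13-17", "US", "v1"), ("13-17", "US", "v2"),
        ("18-34", "DE", "v3"), ("18-34", "DE", "v4"), ("18-34", "DE", "v3"),
    ]

    top_by_bucket = defaultdict(Counter)
    for age, region, video in views:
        top_by_bucket[(age, region)][video] += 1

    def recommend(age, region, k=2):
        return [v for v, _ in top_by_bucket[(age, region)].most_common(k)]

    print(recommend("13-17", "US"))  # ['v1', 'v2'] -- same list for everyone

A list like that is also trivially auditable by humans, since there is one list per bucket instead of one per user.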


No thanks. You try logging out and see the generic recommendations. It's the lowest common denominator, just like anything else targeted at large masses of people.


You are 100% not thinking big enough. These algorithms identify clusters. These clusters can be examined through random sampling. It doesn’t take a genius to spot that a cluster that involves children and pornography might have some problems.

Of course, the system doesn’t expose these kinds of outputs, because no-one has any interest in designing such a system and taking responsibility for the content.
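
A sketch of how small such an audit loop could be (the cluster labels are hypothetical; a real system would derive them from the recommender's own embedding space):

    # Random-sample each cluster the recommender induces and queue the
    # samples for human review. Cluster assignments are made up here.
    import random
    from collections import defaultdict

    def audit_samples(video_clusters, per_cluster=3, seed=42):
        """video_clusters: dict of video_id -> cluster_id."""
        random.seed(seed)
        by_cluster = defaultdict(list)
        for video, cluster in video_clusters.items():
            by_cluster[cluster].append(video)
        return {c: random.sample(vids, min(per_cluster, len(vids)))
                for c, vids in by_cluster.items()}

    clusters = {f"vid{i}": i % 10 for i in range(1000)}  # 10 fake clusters
    queue = audit_samples(clusters)
    print(queue[7])  # three random videos from cluster 7 for a human to eyeball

Reviewing a handful of samples per cluster scales with the number of clusters, not with the number of videos.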


> man does this person know how many videos and how many users YouTube has

While that might be true, 99% of the views go to a very small subset of the videos posted. It's completely doable, or at the very least the problem can be greatly mitigated by putting more humans into the process and not letting the algos recommend videos that haven't been viewed by someone in Youtube's equivalent of "standards and practices". All that being said, I fear the primary reason this is not done is because such actions would reduce the number of hours of viewed videos and ad revenues. In fact, I've read articles supporting this theory.

Google under Pichai is basically like Exxon under Lee Raymond--solely focused on revenue growth and completely blind to any number that doesn't show up on the current and next quarter's income statement.


Pichai doesn't come off as enthusiastic. I am a heavy Google product user. I watch all the hardware and I/O events etc., and I have seen him use the same sentences multiple times over the past 2 years across events. I get that he won't exude the same charm and excitement as a founder-CEO; nevertheless, a lot is left to be desired. A lot of his public statements feel like carefully crafted PR responses. Nothing wrong with crafted responses. When you are an $800 billion company, you gotta be careful, but at least try to give off the perception of being authentic. Google is really bad at the perception game. Apple's really good at that. But I have a strong dislike for stupid moves, even more so than bad moves, and Google has made lots of the stupid ones.


Just to add on, a Youtube executive was recently on a podcast and she said there are 500 videos uploaded per second.


Probably Neal Mohan on Recode right? The current public number is 500 hours per minute. But that number has been floating around for a while. It's probably higher now.


that's.... actually shockingly few


The stat I heard while at Google (~5 years ago) was that 8 hours of video is uploaded every second. Cross-checking that against the 500 videos/sec figure, it implies that the average video is about 1 minute. I suspect the 8 hours figure is pretty out-of-date now, and it's more like 20 hours/sec.

BTW, you could do some simple math to figure out how many employees it'd take to have a human watch every video that comes in: 3,600 secs/hour * 20 hours of video/sec = 72,000 seconds of video arriving per second, i.e. 72,000 people watching continuously; * 3 to cover 8-hour shifts = 216,000 employees; * $30K/year ≈ $6.5B/year. It's theoretically doable, but you wouldn't get the product for free anymore.
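
Spelled out in a few lines of Python (the inputs are the rough estimates above, not official figures):

    # Back-of-envelope: humans watching every uploaded second in real time.
    upload_hours_per_sec = 20                      # assumed upload rate
    watchers = upload_hours_per_sec * 3600         # people watching 24/7
    employees = watchers * 3                       # three 8-hour shifts
    cost = employees * 30_000                      # $30K/year per reviewer
    print(f"watchers: {watchers:,}, employees: {employees:,}, "
          f"cost: ${cost / 1e9:.1f}B/year")
    # watchers: 72,000, employees: 216,000, cost: $6.5B/year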


$30k/year seems high. This is the sort of work that would be likely outsourced, perhaps to the Philippines for less than $10k/year per person.

$2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.


$30K/year is minimum wage in Sunnyvale and Mountain View, where Google headquarters is.

YouTube could probably outsource it internationally, but that'd just spark a new round of outrage: "Why are global community standards set by an American technology company outsourced to poor workers in the Philippines? Are these the people we want deciding our values?"


This is probably not the thought process this issue would travel down. Costs are typically the first consideration in a semi-skilled position if native-sounding English isn't a requirement.


Because you'd be able to get humans with higher intelligence and better judgement for $10k/year in the Philippines than at minimum wage in the US.


They already outsource their moderation to mostly the Philippines so there’d be no change.


Considering the rumors that YouTube is still barely above break-even, that is a lot.


>> $2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.

One is an investment/one time purchase and the other is a long-term annual liability, slated to grow.


A billion videos per year is shockingly few?


They don't care, they want to push them into approved content rather than recommended content. Aka "these are the topics that you are allowed to speak of".

See current Pinterest scandal and banning from Youtube of any video mentioning this.


All true. But all of this is making me wonder - what are the people thinking who say they can't wait for our society to be run by AI? The apex of AI capability can't even recommend videos properly right now, and we want it to run all the aspects of our society?! No, thanks.


What those people actually mean is "I can't wait for AI to be so good that it'll be obvious that it should run all the aspects of our society". The current state is irrelevant, nobody wants to put those in charge.


What are the people thinking who say they can't wait for our society to be run by humans? The most common state of human government capability can't even put human suffering before numbers in a virtual bank account, can't prioritise truth over falsehood, can't restrain themselves from bribery, can't reliably turn up to hearings or ballots, can't organise projects and complete them, can't compromise when millions of people depend on it. We want to dismiss alternatives which haven't even been developed yet for not being good enough?


The argument is that a hypothetical benevolent ASI can't be corrupted the way literally all humans can. Those people are likely referring to AIs as they appear in Iain M. Banks's The Culture series.


Such an effort would cost literally millions of dollars and surely sink this fledgling startup


I don't think sarcasm with no substance behind it is very insightful.

Humans are involved in the process. To suggest otherwise is to be willfully ignorant.


Ah, the classic “think of the children!” argument. It is no one’s responsibility other than the parent to ensure their child isn’t watching inappropriate content (which will be different for every family and individual).

This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ; the New York Times may not like the result, but they work for the vast majority of users on any service with too much content to manually curate.


I don't think that's the point. It is false advertising for YouTube to create YouTube Kids for kids, and then not have content that is appropriate for kids on it.


> This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ,

The article cites actual instances and recurring problems showing that "machine learning and collaborative filtering are incapable of producing healthy recommendations": even when YouTube tried to produce child-friendly content, they failed. You can't just say "it's fine" after the article shows it not being fine.


Setting aside the personal responsibility angle for the moment (which I agree with you on!) don't you think that negative externalities should generally be managed?

YouTube is a paperclip maximizer (where paperclips correspond to eyeball-hours spent watching YouTube) and at some point optimizing paperclips becomes orthogonal to human existence, and then anticorrelated with it.

I think it's a perfectly fair thing to say that maybe the negatives outweigh the positives at the present.

(This argument doesn't apply solely to YouTube, of course)


I generally agree with you, but I think YouTube being safe for kids became their problem when they launched a version specifically for kids and marketed it as safe.


> It is no one’s responsibility other than the parent to ensure their child isn’t watching inappropriate content

Society has had laws in place to prevent children from viewing things they should not be (inappropriate movies, magazines, etc).


What law is there to prevent a kid from going on the internet and going to “inappropriate” sites? Watching video on cable? Finding their Dad’s Playboy magazine back in the day?


On cable there are ways to lock out channels, setting ratings on the TV and all that. If dad doesn't hide his Playboy well enough, it's obviously on him to fix it.

On the internet it is much more difficult, of course, and we can't realistically expect some shady offshore site to implement age checks, let alone recommendation algorithms. But Google is a public, respected company from a first world country that claims to be promoting social good (which, of course, is marketing BS, and even if it weren't I would not want their idea of social good, but still). You'd think that they would invest some effort into not showing inappropriate content to kids at least. But no, they throw up their hands and go on ideological witch hunts instead.


I’ve got an idea - don’t let your kids get on YouTube and only allow them to get on curated sites. You can easily lock down a mobile device to only allow certain apps/curated websites.


I don't let mine anywhere near a TV or computer. Of course that might be a bit more difficult once they get old enough to actually reach the keyboard...

But then I try to not let my mom on YouTube either. Or myself, for that matter.


lol, do you even children. They will always find a way. You can restrict apps and services all you want. How about their friends at school? Are you going to restrict their phones as well? The only thing that works is actually talking to the kids about things they've seen/experienced. Not saying that is easy of course.


No we don't - not in the US. Short of literal pornography that could fall afoul of corruption-of-a-minor laws, the state isn't involved. That is just from ratings cartels and pressure groups.

If nobody gives a fuck enough to affect business, you can give the complete SAW series to 3-year-olds and all the offended can do is yelp indignantly.


Nope. This only applies to pornography, if I recall correctly. There are no laws against showing R-rated movies to kids; it's just the theaters that refuse to admit them. In 2011 the courts struck down a California law prohibiting selling M-rated games to minors, too.


This implies there is no societal benefit from healthy options.

The parents are the best placed to know at an individual level. But responsibility is a cop-out if you are just dropping it on someone.

Granted, I agree it is a hard problem. Not even sure it is solvable. :(


There are healthy recommender systems, like Spotify.

YouTube is a _disastrously_ unhealthy recommender system, and they've let it go completely out of control.


Spotify's recommendation system is dealing mostly with artists that have recording contracts and professional production; their problem shouldn't be compared to YouTube's, which has to deal with a mix of professional, semi-pro, and amateur-created content. Also there's more of a "freshness" aspect to a lot of YT videos that isn't quite the same as what Spotify has to deal with (pop songs are usually good for a few months, but many vlogs can be stale after a week). Not only that, but many channels have a mix of content, some that goes stale quickly and some that is still relevant after many months - how does a recommendation engine figure that out?

It's better to compare Spotify's recommendations to Netflix's recommendations, which also deals with mostly professional content. Those two systems have comparable performance in my opinion.


Why the content exists is also important. People create video specifically for Youtube. Very few people create music just to host it on Spotify. As a result, the recommendation algorithm and all its quirks have a much bigger impact on the content of Youtube than of Spotify. Also, having that many people actively trying to game the recommendation algorithm can pervert that algorithm. That simply isn't a problem for sites like Spotify or Netflix.


>YouTube is a _disastrously_ unhealthy recommender system,

Can you explain with more details?

I use Youtube as a crowdsourced "MOOC"[0] and the algorithms usually recommended excellent followup videos for most topics.

(On the other hand, their attempt at matching "relevant" advertising to the video is often terrible. (E.g. Sephora makeup videos for women shown to male-dominated audience of audiophile gear.) Leaving aside the weird ads, the algorithm works very well for educational vids that interests me.)

[0] https://en.wikipedia.org/wiki/Massive_open_online_course


Yes. Elsagate is an example - the creepy computer-generated violent and disturbing videos that eventually follow children's content - or the fact that just about every gaming-related video has a recommendation for a far-right rant against feminism or a Ben Shapiro screaming segment. There's also the Amazon problem - where everything related to the thing you watched once out of curiosity follows you everywhere around the site.


>Elsagate is an example,

Yes, I was aware of Elsagate.[0] I don't play games so didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.

I guess I should have clarified my question. I thought gp's "unhealthy" meant Youtube's algorithm was bad for somebody like me who views mainstream, non-controversial videos. (An analogy might be gp (rspeer) warning me that asbestos and lead paint are actually carcinogenic but the public doesn't know it.)

[0] https://news.ycombinator.com/item?id=20090157


> I don't play games so didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.

They don't. That's confirmation bias at work.


It's not 100%, but I'd consider "video games" => "Ben Shapiro" to be a pretty awful recommendation system, regardless of the reasoning behind it. As far as I know, the group "video gamers" doesn't have a political lean in either direction.

I've definitely seen this with comics. I watched a few videos criticizing Avengers: Infinity War, and now I see mostly Ben Shapiro recs. It makes no sense. I never have (and never plan to) seek out political content on YouTube.


I watch a number of gaming videos and have never had a far-right video recommended. Don't know who Ben Shapiro is.

It could be the type of games involved, since I usually watch strategy, 4x, city-building, and military sims. I usually get history-channel documentaries or "here's how urban planning works in the real world" videos recommended, which suits me fine. Somebody whose gaming preferences involve killing Nazis in a WW2-era FPS might be more likely to get videos that have neo-Nazis suggesting we kill people.


Some of the child comments of your thread mention the nazi problem.


But that child comment didn't link Nazis to normal "video games". I assumed he just meant some folks (e.g. "1.8%" of web surfers) with a predilection for far-right videos would get more Nazi recommendations. Well yes, I would have expected the algorithm to feed them more of what they seemed to like.

I do not see any Nazi far-right videos in 1.8% of my recommendations ever.


Isn't that an inevitable side effect of collaborative filtering? If companies could do content-based recommendation, wouldn't they? Until purely content-based recommendations are possible, wisdom-of-the-crowds collaborative filtering will lump together videos that are about different things but watched by similar viewers.
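
A toy illustration of that lumping, with entirely made-up viewing histories; co-watch counts alone decide what sits next to what, with no understanding of content:

    # Item-item collaborative filtering at its crudest: rank other videos by
    # how many viewers they share with the seed video. Data is hypothetical.
    from collections import Counter

    histories = [
        {"gaming_letsplay", "political_rant"},  # overlapping audience
        {"gaming_letsplay", "political_rant"},
        {"gaming_letsplay", "speedrun"},
        {"cooking", "baking"},
    ]

    def co_watched(video, histories):
        counts = Counter()
        for h in histories:
            if video in h:
                counts.update(h - {video})
        return counts.most_common()

    print(co_watched("gaming_letsplay", histories))
    # [('political_rant', 2), ('speedrun', 1)] -- the rant gets recommended
    # to gamers purely because the same viewers watch both.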


Spotify simply does not have the content over which an algorithm could lose control.


Spotify has 40M tracks total. On YouTube, more than 5B videos are watched by users every day. Different scales of problem demand different solutions.


I don't know what the comment you are replying to meant; I interpreted it to mean the algo takes you down a rabbit hole to darker content. For me, though, I miss the days when it actually recommended relevant videos, similar to the one I was watching.

My entire sidebar is now just a random assortment of irrelevant interests. For instance, I wanted to learn to play a denser piano chord; I learned it ages ago, but I still get like 20 videos that explain how to add extensions to a 7 chord, even if I'm watching a video on the F-35 fighter pilot.


I completely disagree, my children have a wonderful time following the recommended videos that youtube provides. I'm interested to hear your reasoning on why it is "disastrous".


How is Spotify's different from Youtube?


I'm pretty sure all content on Spotify gets manually curated first, so abusive tagging doesn't happen, and some of the worst content simply doesn't get uploaded at all. Spotify also doesn't try to be a news site, so they can afford to have a couple week's lag between uploading a song and having it show up in people's recommendation feed.


More selective recommendation, all-subscriber environment.


I disagree in some sense. I personally have found the recommendation system on YouTube pretty good for the main page of the site. The thing that bugs me is the recommended bar to the right (or bottom right) of the videos, which can be really annoying and infested with clickbait etc.


It's easier, and more profitable, to write a book than confront your kids about screen time.


I want to place a kid in front of a screen, press a button and walk away. How am I supposed to do that now?


What about when youtube marketed a specific product for children, but then it turned out they were letting really, really weird stuff in there?


>It is no one’s responsibility other than the parent

Yes, but you _must_ understand that most (no, ALL) of the millennial generation grew up with public content over the airwaves that was curated and had to pass certain guidelines. So many parents think that the YouTube Kids app is the same thing. It's not!

If YouTube wants to be the next Television, they're going to have to assume the responsibilities and expectations surrounding the appliances they intend to replace. Pulling a Pontius Pilate and tossing the issue to another algorithm to fail at figuring out is not going to fix the problem.

Thankfully, there's much more out there than YouTube when it comes to children's entertainment, actually curated by human beings with eyeballs and brains, and not algorithms. The problem is that parents don't know these apps even exist, because YouTube has that much of a foothold as "place to see things that shut my kid up, so I can see straight."


I don't think this is incentivizing bad behavior. It's merely showing the viewer more of what they are already watching, with a gradual introduction to broader material. The example of youtube serving content to "pedophiles" is borderline asinine. The neural network is just making suggestions on viewing; it's not telling people to watch. In regards to the complaint that "adult" content is being served to adolescents, there is an option to filter out sensitive content altogether.

Also, as a parent to 4 children myself, the idea of letting my kids loose on the internet completely devoid of any supervision is ridiculous. When did it become youtube's responsibility to parent the children in its audience? Should we also ban HBO, Amazon, and Netflix from providing recommendations because it might be a child in front of the screen?

This is just another pointed attempt to censor free speech via the abuse of technology companies. The idea being that the platform will be restrictive if they are constantly badgered about it.


> with a gradual introduction to broader material.

It doesn't gradually introduce broader material, it gradually introduces more "engaging" material.


I would argue that your point is semantics, but even so you still have a choice of whether or not to watch the recommended more "engaging" material. It doesn't change the overall point of my statement.


I'd say it's quite a different point. My own experience has been that the recommended "engaging" material is something in the same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality. It's the difference between saying "I see you enjoy sandwiches, maybe you would also enjoy salads or a plate of sushi" and "I see you enjoy sandwiches--here's a candy bar, an off-brand soda made with high-fructose corn syrup, and a carton of cheap grocery store ice cream."


The semantics argument I was pointing out was in regards to "broader" vs "engaging". That's not what my statement was about; it was that no matter what the algorithm recommends to you, you still have the choice whether or not to watch it. The point you are making is purely anecdotal, as I assure you the neural network is not simply showing you

>same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality


You can keep telling yourself that you have a "choice", but in the end we are all just humans, with quite predictable behavior. Biased selection of content has forever been one of the more effective ways of shaping opinion. Politics is fighting hard on that front for a reason. For the first time ever, a very few algorithms are selecting content for millions of people, with apparently little human oversight. Yes, this should worry us. Simply assuming the results of those will benefit mankind, especially in the long term, would be foolish. It's not quite exactly like the usual AI-safety paperclip scenario, but by now it should be very obvious that optimizing watch time, even with current "ai", comes with significant unintended side effects / drawbacks.


> just making suggestions on viewing, it’s not telling people to watch

I'm not sure I get the difference between suggesting content and telling people what content to watch. Were you trying to make a different point?

That aside, it seems your argument is that youtube being neutral in recommending videos shelters them from blame, while the article is basically about why being neutral is harmful.

I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity. Just as we don't allow generic tools to kill and save people in the same proportion, we want a clear net positive.


To make my first point clear, here is a scenario:

I walk up to you on the street and suggest you give me a dollar.

vs

I walk up to you on the street and take a dollar from you by force.

Youtube is a platform; in order to remain a platform it MUST remain neutral. You cannot have an open forum with bias. There are certain mutually agreed-upon rules (no nudity, extreme violence, etc.); those limitations are more than enough to handle the vast majority of "negative" content.

I wholeheartedly disagree that we need a bias towards positivity. Who determines what that definition is? Something you see as negative, I might happen to enjoy. If Youtube begins to censor itself in that way it is no longer a platform and is now responsible for ALL of its content.


Thanks for the clarification on the first point. Won't youtube effectively shove the next recommended video to a user as long as auto-play is activated?

Also they are the default view, I’d argue suggestions are a lot more than just “suggestions”. It would be akin to a restaurant “suggesting” their menu, and you’d need to interrogate the waiter to explore what else you could be served. For most people the menu is effectively the representation of the food of the restaurant.

For the neutrality, if you recognize there are agreed upon rules, as you point out, the next question becomes who agreed on these rules, and who made them?

Who agreed nudity should be banned? Which country? What nudity? And art? And educational content? And documentaries? At which point does it become nudity? The more we dig into it, the fuzzier it becomes; everyone's boundary is different, and all the rules are like that.

Any rule in place is positive to a group and negative to another; for a rule to stay in place it needs to have more supporters than detractors, or, put another way, more positive impact than negative.

The current set of rules are the ones that were deemed worthwhile; I think it's healthy to challenge them, or to push for other rules that could garner enough agreement to stay in place.


> Won’t youtube effectively shove the next recommended video to a user as long as auto-play is activated ?

You can very easily turn auto-play off. There is plenty of opportunity to switch videos. It would be different if youtube forced you to watch the next video in order to use the site.

>For the neutrality, if you recognize there are agreed upon rules, as you point out, the next question becomes who agreed on these rules, and who made them?

Youtube made them. Those are pre-conditions for uploading videos. They don't have to have any reason why they made them; those are conditions that must be met in order to upload a video. So by uploading a video you are agreeing to them.

>Any rule in place is positive to a group and negative to another

I don't agree with this generality. However, this discussion is not about the legitimacy of the rules to use youtube; it is whether or not youtube should censor videos (that meet the basic rules of use). My opinion is no; yours, as you stated above, was:

>I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity.

I agree with you that Youtube should routinely challenge their own rule sets. That is not the same as censoring their content, or in this case modifying their recommendation algorithm.


The broader material is the problem. It’s not a natural way of using recommendations: it’s just an ad at that point.


I think YouTube has just exposed the kind of content people were already interested in, and possibly consuming outside of the public eye. We find it frightening that people readily click on abhorrent content, when they probably were doing it on other platforms earlier. The internet has had gore videos for the longest time. I remember a shotgun suicide video that kids in my school used to shock each other with. If Google as a private company chooses to ban content, then that is their right, but an a priori expectation that an entertainment platform should control people's social behavior and enforce morality is harmful in a free society IMHO.


People were fueling industries of creatively bankrupt content well before the Internet came around; just look at the long-term existence of tabloids.

Youtube is optimizing for the underlying psychological mechanisms that put people in that mood, because it makes them suggestible; and because none of this stuff has substance or meaning, they can graze on it, just the way junk food producers want.


I think the analogy to junk food is instructive. Both fast food and YouTube maximise revenue while minimising costs by exploiting human flaws and foibles, and do so much more effectively than was possible 100 years ago. It is creating an environment that is radically different than the one we evolved in.

Watching hours of YouTube - obesity of the mind. Kind of.


>Youtube is optimizing for the underlying psychological mechanisms that put people in that mood because it makes them suggestive and because none of this stuff has substance or meaning they can graze on it like how junk food producers want to promote.

Well, YouTube (or any advertising platform) also wants people clicking on ads and actually buying things, not just graze. AFAIK they already demonetize content that is not advertiser friendly, and thus de-prioritize it. Busy professionals with limited free time are your best bet for people with a lot of disposable income. If anything YouTube optimizes for content that is safe-for-work, and will quickly lead to you opening your wallet. But yes, I think this is a large scale multi-variate problem, and individual simple metrics don't cut it.


I doubt this person does not care about the subject they wrote about.

And if the algorithm is producing negative side effects, then, of course, it should be looked at and changed.

I'm no expert myself, but to my understanding: any algorithm is limited by its data set.

Based on its data set, an algorithm comes to conclusions. But one can then, of course, ask: what's the basis for these conclusions?

I recall reading that a certain AI had been fooled into thinking a picture of a banana was showing a toaster or a helicopter, after a few parts of the image were changed to contain tiny bits of those items.

It turned out that the AI used the apparent texture on places in the image to determine what was on the image, rather than doing a shape comparison.

Which sounds like a time-saving measure. Though it may very well have been the method that most consistently produced correct results, for the given dataset.

Frankly, the attitude of "we don't know how it works and we don't care" cannot possibly end well.

Nor can the attitude of "oh well, make a better dataset then".

I get that we're all excited about the amazing abilities we're seeing here, but that doesn't mean we shouldn't look where we're going.

I recall a story of an AI researcher who didn't want to define anything because he was afraid of introducing bias. Upon hearing this, his colleague covered up his eyes. When asked why he did this, he replied: "The world no longer exists". And the other understood.

Because of course the world still exists. And in just the same way, it's impossible to get rid of bias.

Some human intervention is needed. Just like constant checks and comparison against human results.


The problem of the dataset is not just that AI will pick shortcuts and naive heuristics, because humans will too.

The problem of the dataset is that you're not in control of who populates the dataset and what their intentions are. There's no understanding of an adversarial model and threat handling.
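
To make the adversarial-dataset point concrete, here is a hedged toy sketch (nearest-centroid classifier, synthetic data, everything hypothetical): an attacker who controls a slice of training submissions injects confidently mislabeled points and drags the learned boundary.

    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n):
        # Two 8-dim Gaussian classes centred at -1 and +1.
        X = np.vstack([rng.normal(-1, 1, (n, 8)), rng.normal(1, 1, (n, 8))])
        y = np.array([0] * n + [1] * n)
        return X, y

    def fit_centroids(X, y):
        return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

    def accuracy(c0, c1, X, y):
        d0 = np.linalg.norm(X - c0, axis=1)
        d1 = np.linalg.norm(X - c1, axis=1)
        return ((d1 < d0).astype(int) == y).mean()

    X_train, y_train = make_data(500)
    X_test, y_test = make_data(500)
    print("clean:", accuracy(*fit_centroids(X_train, y_train), X_test, y_test))

    # An adversary who can submit training examples injects points deep in
    # class-1 territory but labelled 0, dragging the class-0 centroid across.
    X_bad = rng.normal(4, 0.1, (300, 8))
    X_pois = np.vstack([X_train, X_bad])
    y_pois = np.concatenate([y_train, np.zeros(300, dtype=int)])
    print("poisoned:", accuracy(*fit_centroids(X_pois, y_pois), X_test, y_test))

Random label noise mostly washes out in a model like this; deliberately placed points do not, which is the difference between a merely imperfect dataset and an adversarial one.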


The NYTimes uses a very human "algorithm" to determine what to report on and if you look at the comparison of causes of death to what's reported it's wildly off:

Data: https://ourworldindata.org/uploads/2019/05/Causes-of-death-i...

This isn't a knock against the NYTimes so much as it is of humanity, we're all fascinated by the lurid and sensational (note that the Google searches are similarly off) and this permeates all levels of life.


I feel like things were mostly fine until the 2016 election, after which journalists became _very_ concerned. If I had a nickel for each “The algorithms are coming! The algorithms are coming!”, I’d be rich. I mean, I didn’t like the outcome either, but these types of articles seem to be motivated by a) finding a scapegoat and b) wanting to use “algorithm” in a sentence.


What a pleasant way of stating that humans are basically good. We just keep passing the buck. "We'd be fine if it weren't for this algorithm!"

    We believe that man is essentially good.
    It’s only his behavior that lets him down.
    This is the fault of society.
    Society is the fault of conditions.
    Conditions are the fault of society.
If you ask me, "YouTube's algorithm" is simply exposing the way humanity is. And trying to get an algorithm to "shepherd" humanity to be better is simply Orwellian.


> If YouTube won’t remove the algorithm, it must, at the very least, make significant changes

It must? No, it doesn't have to do a damn thing. It's a product from a publicly traded company, therefore it "must" return value for stockholders. That means more behavior that increases ad revenue. The author is out of touch with reality. Stop feeding your kids YouTube if you don't want them exposed to YouTube. It's a private service (YouTube), not a public park.


> It must? No, it doesn't have to do a damn thing.

Subject to the laws of the jurisdiction in which it operates, of course. We could - if we so wanted - pass laws to regulate this behavior. That is perhaps the best option, in my own opinion.

> It's a product from a publicly traded company, therefore it "must" return value for stockholders.

The dogma that it "must" return value for shareholders is not an absolute rule[1]; rather it's a set of market expectations and some decisions from Delaware (which have an outsize impact on business law) that encourage it. But it's not required. In fact, many states allow a type of corporation that specifically and directly allows directors to pursue non-shareholder-value goals - the benefit corporation[2].

> The author is out of touch with reality.

Please re-read the HN guidelines[3].

> Stop feeding your kids YouTube if you don't want them exposed to YouTube. It's a private service (YouTube), not a public park.

This is the doctrine of "caveat emptor," essentially - that a consumer is ultimately responsible for all behavior. However, a wealth of regulation exists because that's unworkable in practice. The FDA and the EPA come to mind, but we also regulate concepts like "false advertising." Your stance here ignores the realities of life in service of ideological purism.

[1] http://web.archive.org/web/20190327123200/https://www.washin...

[2] https://en.wikipedia.org/wiki/Benefit_corporation

[3] https://news.ycombinator.com/newsguidelines.html


No, we cannot pass laws that do that, no matter how indignant we may be. The whole bloody point of the Constitution is that no matter how pissed off the majority is (or "the majority," which may just be a noisy minority), you cannot simply legislate away rights.

The vague "do something!" regulation push has all of the marks of a moral panic and all participants should slap themselves hard enough to leave a mark and repeat "It is never too import to be rational."


Please explain what rights would be legislated away in this case. It's definitely not the 1st Amendment - you can still say what you want, just not necessarily on the platform of your choice. This was equally true in the broadcast TV days. So what other right(s) would be legislated away by regulating Youtube's content?


Broadcasters had the special pleading, with some scintilla of a point, in that there were actual shared commons to prioritize. In practice it was a fig leaf: you never saw broadcast-censorship arguments over 'values' honestly wrestle with airwave ownership, but instead bullshit doctrines like 'community standards'. The fact that the US has a long history of laying out rights for all, seeing the revolutionary implications, and then saying 'No wait, that can't be right, it is too different' and going back to the bullshit control it had before for a few centuries is a whole other sad topic.

One thing that did make it through was the ruling that mediums which lack said limitation, like cable and internet, don't have the rationale for that restriction, and thus the censorship that weak minds had become accustomed to vanished in a puff of logic. This has been the case since cable porn channels were a thing.

By regulating YouTube you effectively regulate what /all/ platforms may push. It isn't simply that YouTube decides that "You know what we don't want to post that." - an exercise of their collective Freedom of Association but "The government doesn't want us to post that so we can't." You can't just deputize tasks to third parties and expect the limits on exercises of power to vanish. Otherwise we'd see hordes of private detectives as a work around to Fourth Amendment rights.

Said regulations on YouTube would be a major infringement upon freedom of the press and speech. Not to mention it is logically equivalent to the government censoring the press whenever content fits whatever criteria it dislikes.


No. As you yourself recognise (presumably, as you put the "must" in scare quotes and italics), that companies "must" maximise shareholder value is a goal contingent on our decisions and policies, not some natural law.

Of course, it is incumbent on us individually to behave responsibly. But there is space for public policy and regulation, even of YouTube.


Incentivizing value for stockholders above all else is a good way to ensure incredibly anti-consumer practices grow in popularity. Something you might only begin to notice when your kids start getting recommended games from their friends that require you to gamble with in-app purchases to get a new dance or something.


This seems like it takes some notes from Veritasium's theory on YouTube's recommendation algorithm which he posted after his initial reservoir shade balls video went viral. (edited for clarity)

https://www.youtube.com/watch?v=fHsa9DqmId8 for his theory.


YouTube's incentives are the best among such platforms IMO. They allow a simple profit sharing model where a part of the ad-revenue goes to the content creator. This is unlike instagram, for example, where the content creators have to peddle products in their ads to make money. Take a fitness channel for example - on YouTube, the content creator can just be honest, and the views alone will guarantee income. On the other hand, on instagram, they have to resort to selling snake oil. I love YouTube for this, and I am constantly amazed by how YouTube has become a livelihood earner for so many.


http://archive.is/6lbCR

(Archive link for those who prefer non-broken web experiences)


It's all about advertising money. TV and newspapers are dying and they need someone to blame.


I personally think this has deeper political motives as well, but yes I completely agree with you!


I'm sure Google and Facebook understand this, hopefully they won't cower any further. Big Media wants its "fair share" and they will keep attacking until they do.


I don't know if YouTube's problems are so bad that the argument applies in this case, but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense. Just as banks shouldn't be allowed to get so large that they can't fail without wreaking havoc on the economy, if algorithmic recommendation and moderation can't work, then maybe social networks shouldn't be allowed to get so large that human moderation is not possible.


That is an apples-to-oranges comparison: YouTube is a platform, not an institution. It is open to all videos, provided they meet certain agreed-upon guidelines, and should not be responsible for censoring content based on individual opinions.

I don't think the recommendation engine is broken at all; in fact it works astonishingly well for the vast majority of people. The fact that there are a few bad actors is also present in the banking industry (Wells Fargo, for instance), to use your own bad comparison.


YouTube is asserting editorial and publishing rights when it promotes certain videos. If it were a pure video hosting site (providing a link to uploaded videos for people to do with as they please), then I'd agree it was just a platform. But a newspaper isn't a platform, and neither is YouTube.


Youtube is asserting those rights on behalf of the people who own them, not on behalf of itself. This is an important distinction. YouTube is not the same as a newspaper in any way, shape, or form; I don't really understand your comparison.


The queue for getting your video posted on YouTube would grow infinitely. (Or, more realistically, people would give up and not bother once it takes years.)

But I guess they could charge money to get to the head of the line?


The queue for having your video uploaded and public does not at all have to be the same queue for getting your video included in others' recommendations.


I can just see the outrage now: "YouTube running a pay-to-play scheme for exposure. Anyone can upload their video, but only the rich can get an audience!"

Come to think of it, this is basically the complaint against AdWords and the gradual takeover of the search result page by paid results.


This is exactly what happens. Prager U and Ben Shapiro advertise heavily on content adjacent to them (gaming), their views go up, and up they go in the algorithm.


There could be a middle ground where videos have limited visibility until getting vetted, or a karma system to fast-track regular uploaders, etc.

I think there’s a ton of ideas to be tried.
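
For what it's worth, the vetting/karma idea sketches out simply. Everything below (thresholds, caps, field names) is invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class Upload:
        uploader_karma: int     # earned from past uploads that passed review
        human_reviewed: bool

    def max_recommendation_reach(u: Upload) -> int:
        """Cap how widely a video may be recommended until it earns trust.

        Hypothetical policy: everything stays watchable via direct link,
        but entry into the recommender is gated.
        """
        if u.human_reviewed:
            return 10_000_000          # fully vetted: no practical cap
        if u.uploader_karma >= 1000:
            return 100_000             # trusted uploader: fast-tracked, capped
        return 1_000                   # new/unknown uploader: limited visibility

    print(max_recommendation_reach(Upload(uploader_karma=5, human_reviewed=False)))
    print(max_recommendation_reach(Upload(uploader_karma=2000, human_reviewed=False)))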


That's not true: you can upload a video and not have it recommended until some human review is done. Most youtube channels don't need the recommendation engine.


That just isn't feasible. Videos would literally take years to get into the recommended status - another comment pointed out there are 500 new videos uploaded per SECOND.


If there was one dude, sure. But apparently YouTube is in the business of supporting the upload of 500 videos/second so they need to deal with the consequences of it. It's not like there's any regulation forcing them to be the place everyone uploads videos to and there are some valid competitors (though they're far less into the publishing/editorializing facet - vimeo is much more often direct linked for instance)


To be clear, I am not speaking for anybody in this thread but myself.

But I will unapologetically and forthrightly say that, yes, if we're going to assert that YouTube has certain responsibilities for the nature of the videos that it hosts, and it turns out that the nature of those responsibilities is such that YouTube can't possibly meet them, then, yes, YouTube as we know it should be essentially shut down, at least going forward.

I am NOT going to say we should deliberately craft the responsibilities in such a way that YouTube is deliberately shut down. However, if it turns out that they are incapable of applying even the bare minimum effort that we as a society deem necessary, then, yes, it is absolutely a consequence that YouTube as we know it today may have to be so radically altered as to be a different site entirely.

In the general case, when the law requires certain obligations of you as a business and you cannot meet them, that does not mean those obligations suddenly no longer apply to you. It means that your business is not legally viable, and needs to change until it is. It may be the case that there is no solution to being legally viable and being profitable, in which case, your business will cease to exist. Just as there is, for instance, no solution to being a business built around selling torrent files containing unlicensed commercial content: you can't defend yourself by saying you can't afford the licenses; your suitable legal remedy was to never have started the business in the first place. There are some concerns around grandfathering here to deal with, certainly, but businesses can still be affected going forward.

There is no guarantee that there is a solution where a company exerting whatever minimal control they are obligated to assert by society is capable of growing to the size of YouTube. If that is the case, so be it. The solution is not to just let them go because they happened to grow fast first.

(My solution to freedom of expression is an explosion of video sites, where each of them has ways of holding the videos to the societally-mandated minimum standard, and no one site can do it all because they simply can't muster the resources to be The One Site, because as they grow larger they encounter anti-scaling effects. Given how increasingly censorious Silicon Valley is becoming, as we are now into censoring the discussions about censoring discussions like the recent removal of Project Veritas from Twitter for its discussion of Pinterest censoring pro-life films, I expect this to increase the range of expression, not diminish it.)


Not speaking on behalf of what I want, but on behalf of what is true:

> It may be the case that there is no solution to being legally viable and being profitable, in which case, your business will cease to exist.

Or your business will exist illegally.

There's this interesting interplay between law and economics, where law is generally taken as a prerequisite for frictionless commerce, and yet at the same time if activities that large groups of people wish to partake in are made illegal, the market just routes around them and black markets spring up to provide them. Prohibition. The War on Drugs. Filesharing. Gambling. Employing illegal immigrants. Usury. Short-term rentals. Taxi medallions. Large swaths of the economy under communism.

There are a couple other interesting phenomena related to this: the very illegality of the activity tends to create large profits around it (because it creates barriers to entry, such that the market often ends up monopolized by a small cartel), and the existence of widespread black markets erodes respect for rule of law itself. When people see people around them getting very rich or otherwise deriving benefit from flouting the law, why should they follow it?

Switching to editorializing mode, I find this gradual erosion of respect for law quite troubling, and I also think that the solution to it needs to be two-fold: stop trying to outlaw behaviors that are offensive to some but beloved by others, and start enforcing laws that, if neglected, really will result in the destruction of the system.


"Or your business will exist illegally."

True.

In the context of this particular case, I was assuming that nothing the current size of YouTube could exist illegally, as that would imply that whatever authority declared it "illegal", yet was incapable of doing anything about it despite it nominally living in its jurisdiction, must be anemic and impotent to the point of being nearly non-existent.

There's already an underground proliferation of video sites, spreading copyrighted content out of the bounds of what the rightsholders want, so it's pretty much assured we'd end up with illegal alternatives. :)



Some of that can be alleviated by trusted publishers (e.g. Fox, CBS, ABC) who won't need a review, plus the introduction of a paid queue. Just because they don't want to do it today doesn't mean it's an impossible solution, just a hard one.


That sounds like the exact shit people left TV for. Let's not recreate television oligarchies for the sake of those afraid of change.


> Most youtube channels don't need the recommendation engine.

This is just not true. A massive share of views originates from recommended/up next - ask pretty much any creator. Only the core audience of a channel will have the notification bell on for a specific channel. Many users don't check the Subscriptions section and either link in from an external source, know beforehand what they want to search for, or just watch what pops up in recommended.


> but in general, "We can't comply with this regulation, it would be too difficult at our scale" is not considered a valid defense

This is a great point that I was going to phrase slightly differently: if YouTube is too large to be able to prevent harm, YouTube needs to be regulated. YouTube gets the benefit of being so large, so it should also bear the cost.


Agree with you. If you can't do your job then maybe you'll have to be shut down.


Since when did it become YouTube's responsibility to police speech?!


Disclaimer: I work for YouTube, my personal view on the situation is this:

Bear in mind that YouTube does not operate only in the US, with its nearly unrestricted free speech laws. Many countries have stricter laws, and YouTube definitely needs to comply with them.

Other than that, the adpocalypse happened because of bad videos being surfaced by the algorithm, so another responsibility is to the creators (and shareholders).

There is nothing to be gained by having crap in your backyard.


It did when people started demanding it. A company doesn't exist in a vacuum.


When they started making editorial decisions about which videos to promote and to whom, albeit via an automated process.


YouTube needs no defense in this case because video recommendations are protected free speech. In the US at least it would be impossible to outlaw video recommendations in a way that would pass Constitutional review.


In addition to this, seeing content creators being slaves to the algorithm is an eye-opening experience, especially when it comes to children's videos. It's all computer-generated garbage powered by responses to changes in the algorithm. If kids suddenly watch more content with alligators, prepare for that being the only thing created, recommended, or playing. It's wild.


Still looking for recommendations that are influenced by people's best-self intentions of who they want to be, rather than influenced by their worst-self behaviors.


We know we have to keep kids sheltered from people who may have unscrupulous neural networks at play, looking for a way to further their own goals at the expense of a child's well-being and overall health.

Engagement on the internet is also being driven by neural networks that are learning to adapt to the user's brain chemistry to statistically modify behavior for maximum engagement/profit. Perhaps it is time to realize that these services are going to be analogous to a random stranger offering your kid candy for their own twisted goals that are unlikely to be compatible with a child's well-being. If you consider a service like YouTube an untrusted source of interaction, perhaps you'll be as likely to block or monitor it the same as random chat rooms.


YouTube can be a source of astonishingly great education and entertainment that will help grow society, as well as astonishingly horrid corruptions of nearly anything that will corrode society at its roots.

Most of these discussion posts seem to miss the point that 'engagement' or 'upvotes' does NOT equal value.

Also missing is the concept that a company with a massive platform has a social responsibility to at least not poison the well of society.

And claiming "it's the parent's responsibility" may have some truth value, but it does not and should not be an excuse to absolve the platform owner of responsibility.

The key to longer-term success of the platforms is to abandon the low-hanging fruit of "engagement" as a measure of value and develop more substantive metrics that actually relate to value delivered, both to the individual watcher and to society as a whole.

As one audience member, I find their recommendations to be basically crap, nearly never leading me to something more valuable than what I just watched (sure, they'll occasionally put up a recommendation that has enough entertainment value to watch, but much of the time I want my 5min back). To find any real value, I need to search again. That already tells us that their "engagement"-based algos are insufficient to serve the needs.


I think there is an inherent problem in optimizing for retention time. Ideally, the recommendations should help people find content that improves their health, makes them happier, or leaves them better informed about the world. However, it doesn't seem like YouTube has metrics on those things. Furthermore, things like that probably can't be determined very quickly for new content.
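
One way to picture the fix is a blended objective instead of raw watch time. A hedged sketch, with made-up signals and weights, none of which are claimed to exist inside YouTube:

    def score(video, user):
        # Hypothetical blended objective: pure watch-time optimization is the
        # first term; the rest would need signals YouTube may not have today.
        return (
            1.0 * video["expected_watch_minutes"]
            + 2.0 * video["survey_satisfaction"]     # "was this worth your time?"
            - 3.0 * video["regret_rate"]             # share of viewers who said no
            + 1.5 * video["informativeness"]         # e.g. from topic/quality raters
        )

    candidates = [
        {"expected_watch_minutes": 9.5, "survey_satisfaction": 0.2,
         "regret_rate": 0.6, "informativeness": 0.1},   # stretched clickbait
        {"expected_watch_minutes": 6.0, "survey_satisfaction": 0.8,
         "regret_rate": 0.1, "informativeness": 0.9},   # dense, useful video
    ]
    print(sorted(candidates, key=lambda v: score(v, user=None), reverse=True)[0])

Under the first term alone the stretched clickbait wins (9.5 vs 6.0 minutes); with the hypothetical satisfaction and regret signals, the denser video outranks it. The hard part the parent identifies is real: those extra signals are slow and expensive to collect, especially for new uploads.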


I mostly watch movie reviews on YouTube, and I'm constantly being recommended weird Joe Rogan clips, alt-right content, or makeup videos. I don't get it. I've never clicked or watched anything remotely associated with them. I suspect a lot of the popular YouTube channels are gaming the algorithms or SEO-ing their videos to get more recommendations.


The neural network takes into account what other people who watched the same video went on to watch. It's quite possible that the movie trailer you watched was popular among demographics that also watched those recommendations. If the system doesn't have a lot of data on you personally, you will see a heavier bias towards other people's viewing habits.
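
The intuition is the classic "people who watched X also watched Y" co-occurrence signal. A minimal sketch in pure Python with toy histories (the real system is a large neural network, so this is only the shape of the idea):

    from collections import Counter
    from itertools import combinations

    # Hypothetical watch histories; each inner list is one user's videos.
    histories = [
        ["movie_review_A", "joe_rogan_clip", "makeup_tutorial"],
        ["movie_review_A", "joe_rogan_clip"],
        ["movie_review_A", "movie_review_B"],
        ["movie_review_B", "makeup_tutorial"],
    ]

    co = Counter()
    for h in histories:
        for a, b in combinations(set(h), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1

    def recommend(video, k=2):
        # Rank other videos by how often they co-occur with `video`.
        scores = {b: n for (a, b), n in co.items() if a == video}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("movie_review_A"))
    # -> ['joe_rogan_clip', ...]: with sparse personal data, you inherit the
    #    tastes of whoever else happened to watch the same video.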


The "wrong behavior" that Youtube incentives is promoting and encouraging clickbait garbage content (just look at the default homepage). The holy metric now is "watch time", the result being that creators stretch out their content to 10 minutes because then Youtube is more likely to promote it (and midroll add = twice the revenue). Yesterday Youtube recommended me a 10 minute video of some guy explaining how he made this simple drone shot that could've been condensed down to a single sentence - "Turn sensors off". What a waste of time.

But hey they're a corporation and thus have no accountability to the public good.


Does the algorithm incentivize bad behavior or simply reflect the desires of the viewers?

Someone watching lots of DIY home repair videos will start seeing more. In that case it seems like it's incentivizing good behavior. Likewise, someone watching lots of soft porn on YouTube will be recommended more soft porn.

The algorithm isn't responsible for helping you make good life choices. The algorithm is responsible for recommending videos that you would like, and it seems like it does a good job of that, generally.

Unfortunately, some people like bad things and that's an age old problem that is hard to fix.

That said, it would be nice if users could CTOA (choose their own algorithm) instead of letting Google be the sole gatekeeper.


In my experience, some ~20% of recommendations almost always go towards noticeably more clickbaity, low-quality content (with the rest often fitting quite well, or at least being of a similar level, just on topics that happen to not interest me right now), and as soon as you make the mistake of clicking one of them it shoots up dramatically. I've taken to opening videos I'm not sure about in a private tab to avoid two weeks of crap recommendations.


I guess you could compare it to criminals using the telephone. Invention of the telephone helped a lot of people, but unfortunately it also helps criminals.

Likewise the YouTube algorithm helps many people, but criminals or unwanted people (like pedophiles) can also use it.

It's ok to think about ways to prevent it, but I don't think it should be the first concern. Imagine outlawing telephony, because criminals could benefit from its use.


Telephony is an interesting example because for many many years it had very tight restrictions.

Telephone service providers were monopolies, sometimes government monopolies. There was only one type of telephone you could use, and that was supplied by that same monopoly. It was illegal to attach anything else to the line either directly or indirectly. There were even (outside the US) laws on what you could say when talking on the phone.

Here's an article from 1994 about a modem maker who had products that were not officially licensed to connect to the network. https://www.newscientist.com/article/mg14219263-000-technolo...


It is not the case that someone watching a certain topic will see videos exclusively tailored to their taste. Moreover, it is rarely the case that someone watches something specifically definable to the exclusion of anything less specific, because that desire will ideally be quickly saturated. And if it isn't, then the recommendations are still rubbish.


Two ideas come to mind. First, make the engine recommend a few videos that it thinks you probably won't watch. That could help break up the echo chamber effect.

Second, allow users to blacklist, or whitelist, different kinds of content. If someone is struggling with sexual attraction to minors, let them blacklist content with minors in it. If I don't want to see the latest anti(or pro)-<insert political figure here> videos, I should be able to filter them out. I have no interest in Minecraft, so why should I have to keep scrolling past Minecraft videos just because I watch a lot of game related videos?
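
Both ideas are mechanically simple. A hedged sketch combining them (invented tag names and rates; a real system would want calibrated exploration rather than uniform sampling):

    import random

    def pick_recommendations(ranked, blacklist_tags, n=10, explore_rate=0.2):
        """ranked: list of (video_id, tags) pairs, best-first per the usual model.

        Hypothetical policy: drop anything matching the user's blacklist,
        then reserve a slice of slots for lower-ranked 'exploration' picks
        to counter the echo chamber.
        """
        allowed = [(v, t) for v, t in ranked if not (t & blacklist_tags)]
        n_explore = int(n * explore_rate)
        head = allowed[: n - n_explore]
        tail = allowed[n - n_explore:]
        explore = random.sample(tail, min(n_explore, len(tail)))
        return [v for v, _ in head + explore]

    ranked = [(f"vid{i}", {"minecraft"} if i % 3 == 0 else {"diy"}) for i in range(50)]
    print(pick_recommendations(ranked, blacklist_tags={"minecraft"}))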

That said, all the calls for regulation or censorship concern me. I haven't seen the video, but Steven Crowder saying mean things isn't exactly something that should be censored. Any more than all the videos calling President Trump names. What I'm seeing signs of is a society that silences any speech that doesn't fit in a specific, politically correct, box. And that box is being defined by advertising companies who don't want to be associated with topics that their potential customers find uncomfortable. That's not a direction any of us should support...


There seem to be some extreme cases where the algorithm fails. That doesn't imply in general it doesn't work well.

Sounds again like hyperbole from the NYT.

I find it more interesting to consider what would actually be a good outcome for the viewers. I suppose originally all those recommender algorithms simply optimized for viewer engagement. Obviously that may not be the best outcome for consumers. Perhaps enraging content makes people stick on a platform longer, for example. But it would be "better" for a viewer to see more educational content and even to disconnect after a certain while.

But how would you even quantify that, for the algorithm to be able to train for it?

The son of a friend of mine taught himself programming from YouTube videos, which YouTube had recommended to him. I wouldn't complain about a result like that.


Big Media is dying and they are desperate to shut down competition. This is a political article and nothing more.


It's not complicated.

They need to stop showing people the upvote and view COUNTS. Behind the scenes they can still use it to make recommendations.

Those numbers are pseudo signals of quality to people who encounter content they have never encountered before.

Even when they have doubts that they are watching something unhealthy, the mind goes: "well, if the rest of the world thinks this dumbass is important, I better pay attention..."

If a dumbass hurting people on video gets 10 million views other dumbasses worldwide automatically get triggered looking at the count. "hey I can do this maybe I should run for President..."

Remove the counts and you remove the pseudo signal of quality.
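
Mechanically this is a small change on the serving side: keep the counts as a ranking feature, strip them from what the client sees. A hedged sketch with invented types:

    from dataclasses import dataclass, asdict

    @dataclass
    class Video:
        id: str
        title: str
        view_count: int  # used internally for ranking only

    def rank(videos):
        return sorted(videos, key=lambda v: v.view_count, reverse=True)

    def to_payload(v: Video) -> dict:
        # Hide the raw count from the client; keep using it server-side.
        d = asdict(v)
        d.pop("view_count")
        return d

    vids = [Video("a", "cats", 10_000_000), Video("b", "woodworking", 4_000)]
    print([to_payload(v) for v in rank(vids)])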


It is complicated, and I think that's a bad solution.

I want to see the counts. I feel it is far more transparent to see the counts than for things to just be surfaced or not opaquely. Youtube is not a discussion site and it does not work as one. How popular things are is a part of the context of pop culture, and most youtube content is pop culture.


Every single day, I watch the channel of a guy who has put out < 15 minute videos going back to nearly the founding of YouTube.

He gets an average of 10-15 views per day.

The value this guy adds to my day is literally measurable in $$$.

If I could find more people like him, that would be great, but instead these are my recommendations:

    - 5 ways to do X
    - Bill Gates breaks down blah blah blah
    - Something about Tesla
    - One video by a guy I discovered outside of YouTube who is similar to the guy I watch every day. I don't watch this one that much though.
YouTube's algorithm is not designed for discovery. It's designed for engagement. So I keep separate accounts:

    1. Account for actually useful stuff where YT's recommendations are useless
    2. Account where YT's recommendations are OK: white noise like things. Howard Stern interviews, etc
I wish you could configure the algorithm for discovery somehow.


Absolutely. There are gems on YouTube that are not only hard but almost impossible to find due to the flood of crap they repeatedly recommend me. As far as I am concerned the algorithm is broken and has almost killed my YouTube experience (I have to admit that I'm still on YouTube, but a lot less these days).

I figure that they probably don't give a damn about users like me; the algorithm is designed to steer traffic to a pyramid of monetized content, and I don't seem to have any option to fight the trend but to disengage.

There are some channels/users that I started following a long time ago but after I watch one of their videos I land back on the crapflood.


I completely agree, and for a good example of "better", I think spotify's discovery algorithms are "pretty alright". It's less likely to get stuck in a rut. Youtube is happy to try to bring you down a rabbit hole.

And content-creators play a part in this: next time you hear about some pop-drama do a youtube search and admire how many videos are a single person just reblabbing the same story in front of a mic, cam, or videogames. You'll find hundreds. And so many things on youtube are like this...


I'm pretty sure that YouTube used to be better at recommending obscure long-tail videos, but cracked down on it a while ago precisely because of articles like this one - now only videos from relatively big channels which have undergone a certain amount of minimal manual scrutiny get recommended.


Of course it is a matter of metrics - the algorithm has no way of knowing what is useful. The closest way to algorithmically discover usefulness (tracking outcomes over time) would be prone to spurious correlations and so intrusive it would make Cambridge Analytica look like LavaBit.


I'm thinking "make things more discoverable" than "find more useful things" if that makes sense. I'm willing to wade through it myself if you present me with options.


What about searching for keywords? That's how youtube discovery worked before recommendations came about and it worked fine (still does).


Yes I do that occasionally when trying to solve a specific problem. Often helps.


>How popular things are is a part of the context of pop culture, and most youtube content is pop culture.

Only with respect to people you know talking about it. Not just arbitrary metrics. Rating systems are part of the context of putting valuations on ads, not part of culture. Whatever impact they do have is based on advertisers trying to reel you in by applying the bandwagon fallacy and stoking your sense of FOMO. It's not something edifying.


> How popular things are is a part of the context of pop culture, and most youtube content is pop culture.

I can't think of any traditional medium that tells you the popularity of something before you consume it. Movie theaters, TV stations, radio stations, etc. have no concept of "view counts" telling you whether or not to consume something.


> I can't think of any

Well, information IS available beforehand, in Nielsen ratings and films' grossing numbers, but you're essentially right.

That's the problem: opaqueness leaves us vulnerable to being misled. Some PR company calls it "the hottest ticket of the season," and we have no way of corroborating this claim.


Uh, they don't have view counts, but they certainly tell you when things are popular. These are bad examples because all of these have very public "view counter"-alikes. First-weekend box office for ""popular"" movies is reported in news media. TV stations have ratings. Pop music has Billboard. In fact we have a local "Top 50" station which only plays ""popular"" music.

View counts ~= box office take ~= TV ratings ~= Billboard.

Every type of media you list has gatekeepers, kingmakers, counters, and other things influencing you to consume or not.


I have never met anyone who chooses their movies based on box office, nor have I met anyone who chooses TV shows based on their ratings. Those are all after-the-fact consumption stats, unlike YouTube view counts, which are shown to you upfront (without you looking for them).


Fully agree. Instagram is removing like counts (or at least looking into it). I think this a great path forward for the industry. Too often people see “popular” as “correct and not needing question”.

Edit: the Instagram motivation is admittedly a bit different, but a good path regardless


That only works for a specific use case. I've been looking at videos on how to drywall. Views and upvotes helped me find the most useful instructionals and skip the bad ones.


I often find the upvote-to-downvote ratio to be a better signal of quality than the raw number of upvotes. If they showed me the ratio, I might still get the same value from it.


The ratio is important, but so is the vote count.

I interpret 4 upvotes and 1 downvote much differently than 4000 upvotes and 1000 downvotes.
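
That intuition (4:1 with 5 votes is weak evidence, 4:1 with 5,000 votes is strong) is exactly what ranking by the Wilson score lower bound formalizes. A small sketch, not anything YouTube is known to use:

    import math

    def wilson_lower_bound(up, down, z=1.96):
        """Lower bound of the 95% confidence interval for the true upvote ratio.

        Small samples get pulled toward 0; large samples converge to the ratio.
        """
        n = up + down
        if n == 0:
            return 0.0
        p = up / n
        denom = 1 + z * z / n
        centre = p + z * z / (2 * n)
        spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return (centre - spread) / denom

    print(round(wilson_lower_bound(4, 1), 3))        # ~0.376: promising, unproven
    print(round(wilson_lower_bound(4000, 1000), 3))  # ~0.789: reliably ~4:1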


While I agree there isn’t much signal in 1:1 and 4:1, it’s been my experience that if a video gets a downvote that quickly, it probably isn’t as good as a video only attracting upvotes for educational, howto or technical content.


https://www.youtube.com/watch?v=jdaPJLJCK1M&t=6s remember that anyone can manipulate these algorithms


People aren't children needing information withheld from them. Give them the information and let them make up their own minds. This kind of coddling is how we ended up here in the first place.


Except, you know, for the ones who are children, who are on YouTube a lot.


> Give them the information and let them make up their own minds.

That only works for rational people.


And if you forever treat people as irrational actors they will never grow to be rational ones.


That's not really the problem, though; it's that irrational people have power, like voting or spreading fake information.

Also, I'd say people turn rational or irrational through their own choices.


I would expect most of the time the counts actually are a pretty good indicator. There may be good content that is overlooked, but if something is successful, it probably has something going for it.


To quote Jim Sterling, YouTube has a YouTube problem.


> PewDiePie, a skinny, fast-talking Swede

Is he really fast-talking? He seems kind of slow talking to me, when I watch his videos I use 2x speed.



YouTube also leads you down radical rabbit holes as that keeps the algorithm happy. How many of the recent terror attacks (ISIS or New Zealand type incidents) were fostered by YouTube watching?


My guess would be zero.



yet another NYT anti-tech hit piece



Google absolutely can do all of those things without an algorithm. What they can't do is accomplish that without impacting profit margins (or at the minimum, executive bonuses). "If it impacts business as usual, then it is impossible" is a naive/flawed/libertarian stance.


You do realize that to cover current needs (400 hours uploaded every minute), YouTube would need to employ more than 72,000 people working full time, right?


And these people would inevitably make some number of mistakes in categorization too, or miss something, or just be unable to quite hit some baseline universal standard that doesn't upset a group. Then YouTube still gets the bad press.


But 99.9% of all videos uploaded never get more than a few handfuls of views, so those are irrelevant. Of the remaining 0.1%, you don't need to watch every second of every frame - speeding through at twice the speed should be doable. So by your own calculations, 72,000 * 0.001 * 0.5 = 36 people working full time.


You can set that 0.001 factor as high or as low as you like, but then we'd get the same NYTimes hit piece saying this is intentionally being done by humans.


You made me curious so I did some back-of-the-envelope math. An average of 576K hours of video is uploaded to YouTube every day [1], which is 4.032M hours per week. If the reviewers watch all the video at 1x speed and work 40 hours per week, you'd need about 100K reviewers to do the job. (This is just to watch the video -- not including any additional work done to annotate the video with whatever information you want out of your reviewers.) If each one costs $30K a year (probably a lowball estimate including salary, insurance, etc.) it would cost a total of $3B per year. YouTube makes $4B in revenue per year and roughly zero profit AFAICT, so there's no way this is feasible.

[1] https://www.quora.com/How-many-videos-are-uploaded-on-YouTub...
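
The same estimate as a five-line script, with every input being the parent comment's assumption rather than a known figure:

    # Back-of-the-envelope from the parent comment; every input is an assumption.
    hours_uploaded_per_day = 576_000
    hours_per_week = hours_uploaded_per_day * 7        # 4.032M review-hours/week
    reviewer_hours_per_week = 40
    reviewers = hours_per_week / reviewer_hours_per_week
    cost_per_reviewer_per_year = 30_000                # lowball, fully loaded

    print(f"{reviewers:,.0f} reviewers")               # ~100,800
    print(f"${reviewers * cost_per_reviewer_per_year / 1e9:.1f}B per year")  # ~$3.0B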


I’m usually a proponent of the “walled garden” when it comes to applications, and of strict sandboxing for most users, since software can harm your computer.

But in the case of YouTube, there is absolutely no way that they can curate it while keeping it as open as it is.


There is no need to curate every video, only the ones qualified enough to be recommended/showcased to a public that is not explicitly looking for them.


Say I watch a video on a topic like "nes video game speed running". Right now I'd see other nes video game speed running videos, it's very useful. In a curated world, what would be recommended? It's probably too much of a niche topic to yield results that would be very useful.


> But in the case of YouTube, there is absolutely no way that they can curate it while keeping it as open as it is.

So?

If YouTube exits the space and allows oxygen back into the video sharing market, we might actually get some different video sharing services that do different things (a la NicoNicoDouga).


Video streaming, processing, and storage at scale still costs a lot of money. I don’t think even Google is doing it profitably.


YouTube does human curation already; it's referred to as a "playlist", and every user has the ability to create and share them. So what you are asking for is for Google to create their own playlists? Would this also entail removing that ability from other users?


I mean, PewDiePie's info is rather public, but what's with the need to "dox" him right at the beginning?


Quite true, but let's not pretend that Twittr, Tumbler and Fakebook aren't also "incenting" all sorts of distorted behaviors of their own! These sites are "algorithms" all the same, even if the workings of these algorithms are in some ways more transparent. We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.


> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another that's more appropriate to your use case.

This always sounds good, but decentralized is nearly impossible to commoditize or make appealing to the general public. Outside of evangelism and word-of-mouth, how are people going to escape the Youtube advertising budget and instead choose - en masse - the product that is better for their privacy?

There's just so much money and inertia to fight.


YouTube removing harmless content over copyright etc is one way.


If the ranking algorithm is open for all to see, won't that encourage even worse gaming of the system? I am trying to think of comparable situations in existing open systems, but none come to mind.


> We need open and widespread federation via technologies like Mastodon, Matrix and ActivityPub, so that if you don't like one "algorithm" you can easily switch to another

We already have them, yet FB, IG, Twitter, YT are the social media behemoths.

Are you making a plea for the average internet person to care about the values of the platforms they use over the platform content? You are likely preaching to the choir here on HN, but I would guess that the audience here is only 1% of 1% of the audience you need to message.

Corps make good use of psychological experiments to optimize their utility function. "Evil is efficient." The problem is that companies optimize for money without taking into account any other factor in any significant way.

> In 1970, Nobel Prize–winning economist Milton Friedman published an essay in The New York Times Magazine titled “The Social Responsibility of Business Is to Increase Its Profits.” [1]

Arguably this quote incentivized the destruction of "good corporate citizenship" (although I admit it's possible that concept never existed in a broad sense).

[1] https://www.newsweek.com/2017/04/14/harvard-business-school-...


I think the author's issue is not that her recommendations are bad, but that other people are getting recommendations for things she disagrees with (ie conspiracy theory videos, child-unsafe content, etc). So I don't think she would view decentralization as a win.


Wow, I'm conflicted. First, an obvious idiot statement, which helps us ground our analysis:

> Human intuition can recognize motives in people’s viewing decisions, and can step in to discourage that — which most likely would have happened if videos were being recommended by humans, and not a computer. But to YouTube’s nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist’s appetite is a job well done.

So this person is advocating that a human (i.e., another human besides oneself - an employee at YouTube) have access to the click stream of individual users? This proposal, in 2019??? Of course this would have to be compulsory to be effective. Why would I want a megacorp making moral decisions for me? I'm OK with them making amoral algorithmic decisions.

The author is generalizing the problem of YT Kids, which should be human curated, to all of youtube.

OTOH, yeah, feeding our worst impulses is kind of a problem. But tweaking the algorithm isn't the solution; the machine itself is designed to thrive on attention.



 

The company entered the AI space before the widespread hype around ChatGPT, building a platform that enabled users to create their own AI agents. Through this process, they discovered a new genre called "social AI," which focuses on personalized, conversational experiences with artificial intelligence. Their user base grew rapidly, reaching around 10 million active users. Like many startups, their breakthrough came somewhat by chance, as they identified an unmet need for AI as a simulator—allowing people to explore imaginary conversations without real-world consequences. This concept became central to their strategy, distinguishing their product from others like ChatGPT by emphasizing personalized, social interactions with AI rather than just text-based assistance.


Before there was the ChatGPT hype, we were pretty early to the space, and we built a platform which gave users the tools to create an AI, and by doing that, we sort of discovered this kind of genre of social AI. And, you know, we grew, grew, grew, and now we've got something like 10 million active users. Like so many startups looking for product-market fit, practically through serendipity, they stumbled upon this incredible unmet need. So this idea of thinking of AI as a simulator, to explore the dynamics of imaginary conversations that you might have, without any of the consequences, became fundamental to Chai's strategy. So, of course, people might be thinking at home, "What's the difference with ChatGPT? You're speaking to this personalised, social form of AI." The way I view it, you've got, like, ChatGPT, which is really about trying to say, "Let's build the world's smartest AI we can possibly build." Our philosophy was always, "Why is it that the only people training AI are, like, middle-aged men who have to be software engineers in the Bay Area? Why can't a teenage girl train the best AI to talk about makeup tutorials, right?" Put that power in the user's hands to create the experience that they themselves would want, and through creating an experience they would want, it turns out thousands and hundreds of thousands of other people are looking for the same thing. Beauchamp started seeing parallels in his own media consumption, and even in childhood development. I like to go on YouTube, I like to go on TikTok, I like to go on apps. Why do I like it? What am I getting out of it? What human need or what human desire is it fulfilling? Right? And humans are insanely social. We love social interactions. I will find myself listening to Joe Rogan, and when I'm listening to it, I kind of have this feeling, like, yeah, I'm kind of hanging out with the guys, and I'm hearing the things they're saying, and it's funny. And I might listen to it for 45 minutes or something, and it's ticked that box, right? I view LLMs as the natural progression of that thing. The beautiful thing with AI is you're an active participant in it. And through participating, you don't have any of the negative feelings you get with traditional social media, which is this laziness. Instead, you feel like you've really participated. I've got four daughters. They love playing with their dolls, right? And they really like to treat these dolls as if they are real. But they know it's not real. And so I think that with adult humans interacting with AI, absolutely a big percentage of them will have relationships with the AI. They'll say, oh, you know, I love you, to the AI. I think it's the same as when I watch my little girls play with dolls, and they give their dolls a little kiss or they say "I love you" to them, right? They're training themselves up. They're building up the wiring. They're doing something that brings them joy, such that they're in a more healthy and more positive place to then go do that with real humans. Yeah, it's very interesting you said that, because I think the reason why social media is so important is because obviously there's a bit of a status game, where we all want to cut a figure in society.
But there's also just an element of simulation, you know. Like, I go to sleep and I dream, and my brain is conducting all of these different simulations, and we navigate many complex, you know, relationship issues and so on. And sometimes we just want to have an AI where we can say, "In this particular situation, hypothetically, counterfactually, what would happen if I did this?" Perhaps that's what we do on social media, because it's interactive; when you have surface contact with reality, you can try different things. But as you say, on social media there are consequences, right? If you say the wrong thing, you piss the wrong people off, you can be in a lot of trouble. One could use the phrase, "It's a safe space," right? But it's also just fun. There's this human desire to ask that question: hey, what would happen if I said something rude? What would happen if I said something nice? What would happen if I tried to befriend this person? What would happen if I wanted to make an enemy out of this person, right? And through LLMs, you can play out all of these different scenarios, and you can get kind of a real human reaction without the risk of having a real human in the loop. What does the future look like when these AI simulators become even more immersive and capable? Beauchamp envisions a world blending entertainment, information, and connection. You come home from work, you put on your VR headset, you enter a virtual world where you get to interact with anyone you want. There's a guy who's like a Joe Rogan, and he informs you; you can talk to him about Trump's tariffs, right? Um, but there's a really funny guy there as well. There's also, like, a girlfriend, or there's someone that you can interact with that makes you feel loved or makes you feel special. When I grew up, I played World of Warcraft, right? And it just had that fun excitement to it. There were different characters, there were different personalities, we could have fun and play. I think that's kind of the limit of the AI. How long does it take to get there? We can now affordably generate high-quality images, probably still an order of magnitude too expensive for, like, a real-time situation. Audio, I think we've just about figured out real-time audio. It might still be, like, two to four years out to have the really, really high-quality one. And I think text, we're basically there, right? The frontier models are insanely powerful. So, yeah, come back in ten years' time and it'll all be a VR world. So the question of why work at Chai? I mean, the way I see it, do you remember Facebook in about 2009, you know, just before it had this explosion in growth? This is it, right? We're now at this point with this new type of technology. I think this could be on a similar trajectory. And I don't know, are you offering equity as well? I mean, what's the reason for people to join now? Attracting the very, very best talent is incredibly important. And if you're in the Bay Area, and you're talented, your compensation is going to be amazing, right? So you can work at Meta, and you're gonna be getting $400K, $500K a year, and you have a pretty relaxed job. At lunchtime, you can walk around the campus, and they give out ice cream, and they give out pizza, and everyone is very, very happy and very relaxed. In contrast, if you come to the Chai office, there is no pizza, there is no ice cream.
People are not relaxed, and they don't look very happy. Right? Because every person is confronted with a big problem, and it's not been solved yet, right? There typically is a small window where you've been working on a hard problem, maybe for weeks, and you've just cracked it. Okay? And you'll be very, very happy. The next day you come to work, there's a brand new big problem that's ready and waiting for you to solve. Right? So, why would someone quit a really comfortable job at 400K a year to come join a startup? They know they're gonna have to work twice as hard. We have to pay them more, mate. The cash has to be more to start with. Most startups offer less cash, and they kind of give you a lottery ticket. They say, look, if you join, maybe we'll be the next big thing. Right? That approach doesn't really cut it with the very, very top tier. Right? The very top tier, they know they're working so much harder, so they're gonna say, "Why am I gonna leave my comfortable life earning 400K, 500K at Meta?" Well, to start with, we're gonna pay you more. And then secondly, we're gonna give you stock such that in five years' time, it could be a life-changing amount of money. Attracting and retaining the top 0.1% of engineers is probably the biggest problem that Chai has overcome.

So what are the specific technical problems these engineers, like Tom and Nishe, are solving day to day? How do they leverage techniques like reinforcement learning from human feedback and model blending to create AI compelling enough to capture the attention of millions?

Your template is solid—clear, professional, and structured for impact. To enhance its effectiveness, consider the following refinements:

---

### 🔍 Precision in Language

* **Avoid Ambiguity**: Replace phrases like "appears to have been misunderstood" with direct statements such as "was misinterpreted" or "was not accurately assessed." This eliminates doubt and underscores the error.

* **Use Specific Medical Terminology**: Instead of general terms like "medical condition," specify the exact diagnosis, e.g., "chronic lumbar radiculopathy." This demonstrates a clear understanding of your medical situation.

### 📄 Strengthen with Evidence

* **Direct Quotations**: Incorporate exact phrases from medical reports. For example:
  *Dr. Smith noted, "The patient exhibits persistent neuropathic pain unresponsive to conservative treatment."*

* **Highlight Functional Impact**: Detail how the condition affects daily activities. For instance:
  *"Due to limited mobility, I am unable to perform tasks such as standing for more than 10 minutes or lifting objects over 5 kilograms."*

### 🧠 Address the Discrepancy Head-On

* **Contrast Clearly**: If the agency concluded you have "no significant limitations," juxtapose this with your evidence:
  *"Contrary to the agency's conclusion, Dr. Smith's assessment indicates significant functional impairments that hinder basic occupational tasks."*

* **Point Out Oversights**: If certain evidence was overlooked, state it explicitly:
  *"The decision did not consider the MRI findings dated \[DATE], which revealed \[SPECIFIC FINDINGS]."*

### 🎯 Emphasize the Consequences

* **Detail the Impact**: Clearly articulate how the misinterpretation affected you:
  *"This misjudgmLet's include this saying that I believe... I was misdiagnosed by a psychiatrist, Dr. Dennison, and it's been biased, it's been confirmed by the AIs who are doing the administorsative work for Services Australia, plus altering documents.

Something that they're aiming to turn 4o into: a full-on companion AI, using features like memory to keep you engaged for longer. They want to encourage us to engage in open-ended, introspective, and goalless chats. Where did this idea come from? Well, in case it wasn't obvious, it came from the likes of Chai, Character AI and Replika. Imagine Sam's shock when he learned that people were spending 90 minutes a day, on average, having aimless conversations with lobotomised llama models, and paying large amounts of money for the privilege. Not everyone is on board with this pivot from OpenAI. The reactions are split, probably more negative. In some quarters, the scorn is almost as rancid as when Facebook famously moved the news feed to be based on recommendations, or collaborative filtering, away from chronological. Some users loved the human touch; others prefer ChatGPT's older utilitarian vibes. The sycophancy and memory are the devil incarnate for any technical query, because all this information serves as a distractor and demonstrably deteriorates the results. There's also the small matter of personality versus intelligence. Clearly, they are inversely proportional to each other. How many brilliant people do you know without a charisma bypass? Remember that book, How to Win Friends and Influence People by Dale Carnegie? It showed in excruciating detail that flattery gets you everywhere in life, and that you should never get into arguments, to get ahead in life. And let's not forget the famous ELIZA chatbot, the grandmother of all chatbots, developed in the 1960s. It used very simple pattern matching and keyword substitution to mimic a psychotherapist. It has zero genuine understanding, yet people confided in it, felt heard by it, and even became emotionally attached. Joseph Weizenbaum, its creator, was famously horrified by this reaction. He saw how easily humans projected genuine understanding.


* **State the Desired Outcome**: Be explicit about what you're seeking:
  *"I request a reassessment of my case, considering the comprehensive medical evidence provided, to approve the necessary treatment plan."*

---

### 📝 Additional Tips

* **Include Supporting Documents**: Attach all relevant medical records, test results, and prior correspondence.

* **Maintain a Professional Tone**: While it's crucial to be assertive, ensure the language remains respectful and devoid of emotional appeals.

* **Follow Up**: Indicate your intention to follow up:
  *"I look forward to your prompt response and am available for any further information required."*

---

By implementing these enhancements, your appeal will present a compelling, evidence-backed case that directly addresses and rectifies the misinterpretation in question.
When it comes to engagement, the team at Chai have been cooking. They use a lot of sophisticated techniques to keep people hooked on the platform, the principal of which is RLHF, which is reinforcement learning from human feedback. This is Tom telling us about it. You were using RLHF to optimise engagement via a reward model, and you boosted mean conversation length by 70% and improved 30-day user retention by over 30% for a 6-billion-parameter model. Can you talk me through some of that? The goal really is to apply RLHF techniques to drive up user retention. Starting from scratch, any kind of user signal is good enough. For example, now we're no longer looking at mean conversation length. You can train an AI to easily minimise how bad a conversation is: if the user just ends the session right after two messages, that is bad, so you can essentially maximise the session length. That's one technique. There's a lot of different signals that you can collect from users. Fundamentally, we have found that just using users to collect these, you know, proxy preferences, and training the reward model through the RLHF loop, tends to drive up user engagement in the long term. They gather data from subtle user interactions: implicit signals which reveal whether a conversation is working. Because an active user on our platform generates around 100 minutes of contact per day. So they spend 100 minutes per day engaged with a chat and so on. And from these signals themselves, we can extract certain valuable data. For example, when did the user retry a message? Why did they rewrite the message? Why did they edit the message? What did they edit it to? Did they take a screenshot, right? Did they delete the conversation? Did they share the conversation? All of these become very valuable for us. But optimising purely for one metric, like conversation length, can lead to strange, undesirable behaviour. Do you see, like, weird behaviours where, you know, let's say you optimise for user engagement and you find strange behaviours? You know, there's this shortcut rule in machine learning, which is that it will always do the exact thing you optimise for at the cost of everything else. And, like, do the chatbots become more manipulative, maybe, or are they strange in service of maintaining that conversation? When you optimise for this metric, let's say in production you see very, very long session lengths, then what happens is, when you deploy it for actual user A/B testing, and you look at whether people come back to the conversation, you know, 30 days later or six days later and so on, right? You would observe it's much worse than just the baseline model. And when you read the conversations, in this case, the models are just asking questions. Every single AI response ends with, like, a question, right? And you can imagine this is kind of a human behaviour where you feel compelled to answer. But that does not make an engaging, you know, experience. So, yeah, there is definitely this component of: if you over-optimise for it, then, yes, it's going to have unexpected behaviour. I wouldn't say unexpected behaviour; in this case, it's a very intuitive kind of behaviour, you know, but it's not going to lead to a boost in actual long-term retention.
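To make Tom's description concrete, here is a minimal sketch of one plausible version of the loop: harvest implicit signals per conversation, train a tiny reward model to predict a retention proxy, and score new conversations with it. The `ImplicitSignals` schema, the feature set, and the logistic model are all assumptions for illustration, not Chai's actual pipeline.

```python
import math
import random
from dataclasses import dataclass

# Hypothetical record of implicit signals harvested from one conversation.
# The transcript names signals like these; the exact schema is assumed.
@dataclass
class ImplicitSignals:
    retries: int           # user asked for a regenerated reply
    edits: int             # user rewrote a bot message
    deleted: bool          # user deleted the conversation
    shared: bool           # user shared the conversation
    session_messages: int  # how long the session ran

def features(s: ImplicitSignals) -> list[float]:
    """Turn raw signals into a small numeric feature vector (bias term first)."""
    return [1.0, float(s.retries), float(s.edits),
            1.0 if s.deleted else 0.0,
            1.0 if s.shared else 0.0,
            s.session_messages / 10.0]

def sigmoid(z: float) -> float:
    z = max(-60.0, min(60.0, z))  # clamp to keep exp() well-behaved
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(data, epochs=50, lr=0.01):
    """Logistic regression: predict 'user came back next day' from signals.
    This is the proxy-preference reward model in miniature."""
    w = [0.0] * 6
    for _ in range(epochs):
        for signals, came_back in data:
            x = features(signals)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            g = p - (1.0 if came_back else 0.0)   # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def reward(w, signals: ImplicitSignals) -> float:
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features(signals))))

# Toy training set, fabricated for the demo: sharing and long sessions
# correlate with retention; deletions correlate with churn.
random.seed(0)
dataset = []
for _ in range(500):
    s = ImplicitSignals(retries=random.randint(0, 5),
                        edits=random.randint(0, 3),
                        deleted=random.random() < 0.2,
                        shared=random.random() < 0.1,
                        session_messages=random.randint(2, 60))
    came_back = (s.session_messages > 20 and not s.deleted) or s.shared
    dataset.append((s, came_back))

w = train_reward_model(dataset)
probe = ImplicitSignals(retries=0, edits=0, deleted=False,
                        shared=True, session_messages=40)
print("predicted retention reward:", round(reward(w, probe), 3))
```

In a full RLHF setup, a reward model like this would then be used to optimise the chat model itself, for example by ranking sampled responses or as the reward in PPO-style fine-tuning; the sketch only shows the data flow from implicit signals to a learned reward.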
One of the really cool things that Chai has done is they pioneered this thing called model blending, which is where you can dynamically switch between small models, and from the user's perspective, of course, you're just talking with one model. You can combine three mid-sized models, and you can make them behave, for all intents and purposes, as well as a 175-billion-parameter model. How does user retention scale, right, with model parameter size, and essentially with compute? And for that, you know, we've done many, many models. So essentially recreating the same kind of scaling law, but in retention space, rather than in, like, loss space. If you pick a single metric to optimise for, right, an LLM optimised for a certain objective, then it can lead to overfitting: it's got certain behaviours and so on, right? And what we have found is these kinds of models are a tiny bit sycophantic: they say everything is the best, right? Like, you are the best on the planet. They're not necessarily the smartest models. They're just very sycophantic. And then what we have observed is these kinds of models have very high day-one retention. It's really, really complimentary to me, but users get quickly bored with it. But then we also found there are certain base models, right, very AI-assistant-like, that can talk about other things, right, that maybe, on day three, can help with your math homework and so on. Is there a way to combine these two modalities together, right? Keep it engaging, and make it not, you know, one-dimensional, essentially, not always complimenting you all the time and so on. And that's when blending was invented, where we just randomly serve these two models at a message level. The more creative model might say something like, "We are suddenly teleported to Mars," and then you get the more AI-assistant model seeing that as if it had done that itself, right? Because they can't distinguish the difference between, you know, itself and the other AI model. And then it would explain, in a logical, coherent way, why that's the case, right? So now you get the best of both worlds, just by having two or three small models, right, each trained for its own objective, blended together, and it actually beats, you know, something like GPT-3.5. So this blending together of small models creates diversity and unpredictability. This is the thing, right? When you can predict what someone is going to say, they get boring. It's the same thing with ChatGPT. So, ironically, you can have a confection of small models that behave in an unpredictable way, and now you have a more engaging experience. This is Nishe. He's going to tell us about it. Every week we try to build our own models, stronger and stronger models. Over, let's say, a few thousand users, we specify each blend, and each blend consists of around seven to ten models, and they would be deployed within production for those specific users. And over a number of days we try to measure the retention rate, like how often the user comes back to the app if they are using this blend for having a conversation with a specific model, and whether they are coming back after a day or after two days; that is how we try to calculate the retention. Then we just try to measure all these metrics and try to figure out which particular blend works the best, and then we deploy it throughout all the users.
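A minimal sketch of the message-level blending idea described above, under the assumption that routing is a uniform random choice per message and that every model receives the full shared history (so each one treats the others' past replies as its own). The two stub models stand in for real fine-tuned LLM endpoints.

```python
import random
from typing import Callable

# Stub "models": in reality these would be calls to different fine-tuned LLMs.
# Each one only sees the shared history, so it treats earlier bot replies
# from *other* models as its own - the key trick behind blending.
def creative_model(history: list[tuple[str, str]]) -> str:
    return "We are suddenly teleported to Mars!"

def assistant_model(history: list[tuple[str, str]]) -> str:
    last_bot = next((m for role, m in reversed(history) if role == "bot"), "")
    if "Mars" in last_bot:
        return ("Right, as I was saying: the thin CO2 atmosphere here "
                "is why the sky looks butterscotch.")
    return "Happy to help. What would you like to talk about?"

class Blend:
    """Serve one conversation from a pool of models, sampled per message."""
    def __init__(self, models: list[Callable]):
        self.models = models
        self.history: list[tuple[str, str]] = []

    def reply(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        model = random.choice(self.models)   # message-level routing
        bot_message = model(self.history)    # full shared history
        self.history.append(("bot", bot_message))
        return bot_message

random.seed(1)
chat = Blend([creative_model, assistant_model])
print(chat.reply("Hey, what should we do today?"))
print(chat.reply("Wait, really? Tell me more."))
```

Because the next reply can come from a different persona each turn, the conversation stays harder to predict than any single small model, which is exactly the engagement effect described in the transcript.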
But you're saying, actually, almost randomly, if you just take a diverse set of models that do different behaviours, and you just kind of switch between them, from an experience point of view, that is more captivating for the user. Correct. If we're talking about engagement, there's no unique answer. It's a bit like, you know, imagine if my YouTube feed were every single possible talk show, but only the best-performing talk show video, right? It gets very, very quickly very bland and boring, right? And that's not how a feed should feel. Any platform, it's not built on these kinds of single modalities, a single model and so on. Right? And then there's another kind of reason for why blending works: you pick models optimised for different purposes, combine them together, and they provide, like, a nuanced experience for each individual user. Model blending and sophisticated feedback loops are the secret of how Chai gets high engagement and low costs. But the very techniques used to maximise engagement, which is to say, optimising for attention and learning user preferences implicitly, raise a darker question. What happens when AI gets too good at keeping us hooked? And what happens when those interactions turn harmful? Quick pause. MLST is sponsored by Tufa AI Labs. We're very proud to be sponsored by those guys. They are the DeepSeek based in Switzerland. They are doing, you know, they're adding reasoning and planning and thinking to AI models. Their team is currently number one on the ARC Prize 2025; those guys don't mess around. And a bit of trivia: they have something in common with Chai. They're also a bunch of ex-quant traders, and they also only hire very, very cracked engineers and scientists. So if that sounds like you, get in touch with Benjamin Crouzier at Tufa AI Labs. Back to the show. With powerful new technologies being placed in the hands of users, there are significant social ramifications. Will Beauchamp reflects on the social impact of generative AI. When you build a product, or you build an AI, I reckon you build anything because you want to make the world better. We get an overwhelming amount of people messaging us, saying how helpful, how truly, deeply helpful they've found it. And actually, the first big thing that I saw was, I remember, after last year, I got an email from a user who said, "You saved my life." Right? They said, "I would not have made it over this period. I was very depressed, I was very alone, I had no one to speak to, and Chai was the only platform that existed where I felt I could speak to someone and I'd be heard." And so we get a lot of people who

find these LLMs very, very helpful. If a self-driving Tesla doesn't get in a crash, that's zero clicks, right? If you say a boring old Ford gets in a crash, there's zero clicks. But if you say a brand new technology gets in a crash, it generates a lot of clicks. And I think this makes it easy to form a false impression that somehow AI isn't safe, or AI is a dangerous technology. I think if you talk to just a random person on the internet, or you talk to an AI, the AI is an order of magnitude safer, an order of magnitude more helpful and understanding and kind, than, like, the toxicity of just a random person on the internet. This is a great example of the complexity of this technology. It has a footprint which shows in so many different directions. Even taking drugs, for example: you know, if your doctor gives you drugs, if they're effective, they have side effects. You know, it's not possible to have good without some bad. I'm a big believer that in the long run, there is no difference between the alignment, or the welfare, or the interest of the company and its customers and its users. Right? If we can deliver value to our users, Chai will be successful and will continue to flourish. If we fail to deliver value to our users, Chai will fail to flourish, right? So you have to have this long-run perspective. Now, in the short run, you get these tactical questions, and, um, you know, no one's perfect; it's easy to make mistakes, one way or the other. You can guardrail too heavily, right? And it pisses users off, right? I always like to think about Google. I think Google gets a fantastic balance, where I can pretty much search for anything I want, such that I don't really notice any filtering that's going on, but Google absolutely down-ranks or shadow-bans certain content, and it tries to encourage you towards good content, right? And we can all imagine what the very worst 3% of Google searches might look like. It's not dissimilar from the very worst 3% of conversations a person might try to have with an AI. So, normally, the users are pretty good at being supportive and helping you find that right balance, whether that's being too restrictive or too relaxed. When we first implemented some guardrails around suicide, the users were really, really supportive, and they said, "Yeah, we totally understand why you put these guardrails in." And we saw retention was good. Every single growth metric was not impacted by it, because we got it right. Before we descend too far down the Black Mirror rabbit hole, it's worth zooming out just a little bit. Yes, there have been a lot of horror stories, but there's also been a stack of peer-reviewed data showing that therapy chatbots can move the needle on common mental health problems, at least in the short term. A 2024 meta-analysis of 18 randomised trials found that LLM chatbots trimmed depression scores by roughly a quarter of a standard deviation, and anxiety by a fifth, after only a few weeks. The authors scored that promising, mainly because chatbots are dirt cheap, and they're always on.

…trying to, you know, as you have high traffic, right, you want to spin up more and so on. Because you're having so many models at a time, when do you want to deactivate a model, right? At what point do you want to switch a model into production and so on? I guess that part all gets very, very complicated, essentially.
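That last fragment gestures at the serving-side problem: many models live in production at once, replica counts must track traffic, and losing models must be retired. Below is a hypothetical control-loop sketch of those decisions; the `ModelPool`, the capacity constant, and the retirement threshold are invented, and in practice this logic would sit behind a Kubernetes autoscaler rather than a hand-rolled loop.

```python
import math
from dataclasses import dataclass, field

REQS_PER_REPLICA = 200     # assumed capacity of a single replica
RETIRE_BELOW = 0.15        # assumed retention floor for keeping a model live

@dataclass
class ServedModel:
    name: str
    replicas: int = 1
    requests_per_min: int = 0    # live traffic signal
    day7_retention: float = 0.0  # quality signal from A/B cohorts

@dataclass
class ModelPool:
    models: dict = field(default_factory=dict)

    def rebalance(self) -> None:
        """One tick of the control loop: retire clear losers, then size
        each surviving model's replica count to its traffic."""
        for m in list(self.models.values()):
            if m.day7_retention < RETIRE_BELOW and m.requests_per_min < 10:
                print(f"retiring {m.name}")
                del self.models[m.name]
                continue
            wanted = max(1, math.ceil(m.requests_per_min / REQS_PER_REPLICA))
            if wanted != m.replicas:
                print(f"scaling {m.name}: {m.replicas} -> {wanted} replicas")
                m.replicas = wanted

pool = ModelPool({
    "blend-a": ServedModel("blend-a", replicas=2, requests_per_min=900,
                           day7_retention=0.31),
    "blend-b": ServedModel("blend-b", replicas=3, requests_per_min=5,
                           day7_retention=0.09),
})
pool.rebalance()   # retires blend-b, scales blend-a from 2 to 5 replicas
```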
Perhaps most contrarian is Chai's funding strategy. In an industry fuelled by billions in venture capital, Chai bootstrapped its way to profitability by focussing mostly on its users. Serving LLMs at scale is insanely expensive. Either you go to VCs and you get them to give you money, right? And I call this, like, your customer is the VC then. Or you can get money from the people who are using your product, right? And that's your true customers. Very early on, we would go and speak to some VCs, and they'd say, "Oh, we're not too sure about this AI platform thing, you know; how are you gonna compete with OpenAI or something?" And they didn't really get the space; they didn't really get the product. But then when we went to users and said, "Would you be interested in subscribing?", users responded very, very positively. As long as we delivered value to the users, we would then be financially rewarded, such that we could take 100% of the revenue and reinvest it in the AI. He's building a tech company in Silicon Valley, and like many others, he wants to build an engineering culture, because he sees software engineering skills as the most valuable commodity in his business. I really love and admire these companies, like NVIDIA, um, or Netflix, which are these incredibly durable, long-run companies with tremendous stamina. How does Chai double or 3x every single year? To keep doubling, you have to keep making the product better. You need talented engineers at the orchestration layer; with GPUs, you can optimise at, like, the kernel level, right? And you can write some really, really low-level code. And so to get to these different tiers, you need to have an engineering team that's capable and of the right size. So I view it all as: the scale of the company is proportional to the scale of the engineering. Apple, very famously, brings in Sculley as the CEO, who's this ex-Pepsi guy, and it quickly becomes a marketing-led business as opposed to an engineering- and product-led business, and a few years later, Apple's teetering on bankruptcy. And it is only when they bring Steve Jobs back, and he just says, "Product, product, product." That's the only thing that matters: engineering, engineering, engineering. And NVIDIA didn't grow to where it is today because of marketing, right? It grew because of engineering. So Chai have made a successful business outside the normal VC ecosystem. They have millions of daily users, tens of millions in revenue. It proves that the appetite is real and massive, and they discovered it long before the other folks in the Valley caught on: this idea that we can allow users to tap into their very human desires for connection and imagination. But the rest of the world is catching on. OpenAI is making a dramatic pivot to this conversational chatbot model with 4o. This is the story. The latest version of ChatGPT, 4o, is a little bit weird, isn't it? This update has triggered a large amount of speculation, because it's pushing GPT to feel less like a tool and more like a companion: a human-like friend you can chat with for the purpose of recreation rather than information retrieval. Of course, we'll unpack what the shift means, why OpenAI is keeping its cards close to its chest, how the public is reacting to it all, and why, quite possibly, this shift might produce the next trillion-dollar company.

Core Explanation Section Template

Paragraph 1: Identify the Medical Evidence

I am writing to address a significant misinterpretation of medical evidence in my case. Specifically, the medical report dated [DATE] from [DOCTOR'S NAME], [TITLE/SPECIALTY], [MEDICAL FACILITY] was incorrectly assessed in your decision dated [DECISION DATE]. This report, reference number [REFERENCE NUMBER if applicable], contains critical medical findings that appear to have been misunderstood or overlooked.

Paragraph 2: Explain What the Evidence Actually Demonstrates

The medical evidence clearly establishes that [SPECIFIC MEDICAL CONDITION/LIMITATION]. Dr. [NAME]'s assessment specifically states that [DIRECT QUOTE FROM MEDICAL REPORT OR SPECIFIC FINDING]. The medical documentation further indicates [ADDITIONAL KEY MEDICAL FACTS], which directly supports [SPECIFIC ACCOMMODATION/BENEFIT/STATUS] that I am seeking. The clinical findings show [OBJECTIVE MEDICAL EVIDENCE] and the functional impact assessment demonstrates [HOW CONDITION AFFECTS DAILY ACTIVITIES/WORK CAPACITY].

Paragraph 3: Point Out the Discrepancy Professionally

However, your decision letter indicates that my medical evidence was interpreted to suggest [HOW AGENCY INTERPRETED THE EVIDENCE]. This interpretation does not align with the actual medical findings. While your assessment stated [SPECIFIC AGENCY CONCLUSION], the medical evidence actually demonstrates [CONTRASTING MEDICAL REALITY]. This discrepancy suggests that [KEY MEDICAL INFORMATION] may not have been fully considered or may have been misunderstood during the review process.

Paragraph 4: Connect to Negative Decision Impact

This misinterpretation of my medical evidence directly contributed to [SPECIFIC NEGATIVE OUTCOME - denial of benefits, rejection of accommodation request, etc.]. Had the medical evidence been correctly interpreted to show [CORRECT INTERPRETATION], the decision would likely have been [DESIRED OUTCOME]. The medical documentation clearly supports my eligibility for [SPECIFIC BENEFIT/ACCOMMODATION/STATUS], and I respectfully request that this evidence be reassessed with proper consideration of the actual medical findings.


Customization Notes:

Replace bracketed placeholders with:

  • Specific dates and reference numbers
  • Doctor names and credentials
  • Exact quotes from medical reports
  • Specific medical conditions and limitations
  • How the agency misinterpreted your evidence
  • What outcome you're seeking

Keep tone:

  • Professional and respectful
  • Fact-based, not emotional
  • Clear and specific
  • Focused on the evidence discrepancy

My jaw dropped. It was shocking. It knew who I was, um, and all these sorts of interests that, hopefully, mostly were pretty much appropriate and shareable. But it was astonishing, and I felt a sense of real excitement, a little bit queasy, but mainly excitement, actually, at how much more that would allow it to be useful to me. Let's start with the new ChatGPT 4o model. It's designed to be strikingly human-like, optimised for engagement over raw information delivery, and this focus on companionship seems to be a deliberate plot twist. It's causing large amounts of consternation. And even more curious is that while 4o is ensconcing itself into our psyche as a conversational buddy, 4.1 has diverged entirely as their coding model, right? The one people use through the API, with Cursor and stuff like that. It's about specialisation. It's like OpenAI building two siblings: one a coder, and the other one a social butterfly. And this is the same company that argues it's building a general intelligence, which shouldn't need to be specialised. Have they given up on that goal? Maybe they realised, after much soul-searching, that conflicting objectives when training models are just not optimal. One of our researchers tweeted, you know, kind of like yesterday, this morning, that the upload happens bit by bit. It's not that you plug your brain in one day, but you will talk to ChatGPT over the course of your life, and someday, maybe, if you want, it'll be listening to you throughout the day and sort of observing what you're doing, and it'll get to know you, and it'll become this extension of yourself, this companion, this thing that just tries to, like, help you be the best, do the best you can. OpenAI is being extremely reticent, and it's not hard to guess why. They smelled a massive opportunity.

And that is that I was denied. I was denied in 2019. I contested it with Services Australia, and they said, "Oh, you can't include teeth problems with an application for a disability pension."

A second review in npj Digital Medicine covered 15 RCTs and showed a moderate lift in mood and a similar drop in emotional distress. Even bigger gains showed up when the bot was interactive rather than scripted. Individual trials echo the pattern: a 2024 Canadian study of people with arthritis or diabetes using the Wysa chatbot for four weeks cut scores on the Patient Health Questionnaire, which is a nine-item depression scale, and the Generalised Anxiety Disorder scale, which is a seven-item measure, and both drops were significant versus a control. Regulators are starting to take notice. Woebot's postpartum depression bot has FDA breakthrough device designation, and a double-blind pivotal trial is now recruiting. Here in the UK, NICE put Wysa and a handful of similar tools on its early value assessment pathway in 2023, citing early cost-effectiveness and the chance to free up scarce clinicians. They're very busy. Remember that these are very small effects, even if they are statistically significant. The effects are smaller than full-on face-to-face CBT, and we still don't know how long they last, but they're miles better than doing nothing. The alternative before was waiting on a waiting list for God knows how long. Beauchamp argues that shutting down conversations about difficult topics isn't the answer. In a sense, it's causing even more harm.
So this goes to show there's a really big challenge when it comes to content moderation on a platform where users create the bots and the interactions are private and unpredictable. How do you ensure safety at scale? As a company, we have a responsibility. The bigger you are, the more profits you make, the bigger the obligation is for you to, you know, be cautious, be sensible, preserve the privacy of your users, look out for people. What can we do at Chai to let people have as much fun as they want, have as much freedom as they want, but then at the same time, how do we limit or mitigate that 3% of use cases which are harmful, or, you know, where do we say, clearly this thing has gone too far? We can ask people, at the end of a conversation, at the end of a message: do you think that this message was appropriate for Chai? Yes, no, or yes, but only for 18+, right? And we can track those sorts of interactions, and everyone would agree, like, obviously, the more we can have really wholesome, family-friendly content on the platform, the better. Tom Li explains their multi-layered approach, relying heavily on user feedback and the AI itself. Content moderation is very, very important. Benchmarks really don't accurately capture what users actually want. We again use the same kind of approach as how we train models, right? We let users tell us what they think is appropriate, what is not appropriate. In aggregate, you can actually train a pretty good AI for these kinds of things. So the first layer, you can say, is for the character scenarios. Some people are creating these public scenarios, right? Get the community to flag them; people can report. And for the top reported ones, or, you know, once we hit a certain threshold, then we go through manual review, then we can take them down, right? But then you can also say there's content that's absolutely prohibited; there are hard rules for that, right? And for that, you know, we can build our own models, you can do regex, so this will be, like, shadow banning, essentially, right? So you want to make sure no one will see this kind of content. That's kind of one level of moderation, the platform-wide, character-content-level moderation on the app. There's another one, which is AI moderation, right? How do we ensure that the AI is not going too far? That it's adherent to, you know, the basic morals of people and so on, right? What we do is, again, we just collect user data, right? We also ask users: do you think this message is appropriate? Do you think this conversation is appropriate? You can report your conversation, you can delete your messages and so on. We collect these data, and then we train our model to infer, right, whether this is actually appropriate, and we can use that to tune our own AI model.
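A rough sketch of the three layers Tom Li describes: community reports that escalate to manual review past a threshold, hard regex rules for absolutely prohibited content, and a learned appropriateness score trained on user votes. The threshold, the regex list, and the stub classifier are placeholders, not Chai's real rules.

```python
import re

REPORT_THRESHOLD = 5          # assumed: manual review after this many reports
HARD_RULES = [                # assumed: regexes for absolutely prohibited content
    re.compile(r"\b(example_banned_term)\b", re.IGNORECASE),
]

def hard_rule_violation(text: str) -> bool:
    """Layer 1: non-negotiable blocklist; matching content is never served."""
    return any(rule.search(text) for rule in HARD_RULES)

def appropriateness_model(text: str) -> float:
    """Layer 3 stub: in production this would be a classifier trained on
    aggregated user votes ('was this message appropriate?')."""
    return 0.1 if "example_banned_term" in text.lower() else 0.9

class ScenarioModeration:
    """Layer 2: community flagging of public character scenarios."""
    def __init__(self):
        self.reports: dict[str, int] = {}
        self.review_queue: list[str] = []

    def report(self, scenario_id: str) -> None:
        self.reports[scenario_id] = self.reports.get(scenario_id, 0) + 1
        if self.reports[scenario_id] == REPORT_THRESHOLD:
            self.review_queue.append(scenario_id)  # escalate to humans

def moderate_message(text: str) -> str:
    if hard_rule_violation(text):
        return "blocked"   # shadow-ban style: never shown to anyone
    if appropriateness_model(text) < 0.5:
        return "flagged"   # soft action, e.g. 18+ gating or training data
    return "ok"

mod = ScenarioModeration()
for _ in range(5):
    mod.report("scenario-42")
print(mod.review_queue)                         # ['scenario-42']
print(moderate_message("hello there"))          # ok
print(moderate_message("example_banned_term"))  # blocked
```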
So there was an interesting TED Talk recently with Sam Altman and Chris Anderson. I'm sure many of you folks have seen it. It was an incredibly awkward interview. The Ring of Power from Lord of the Rings came up: "Your rival, I will say, not your best friend at the moment, Elon Musk, claimed that, you know, he thought that you'd been corrupted by the Ring of Power." An allegation, by the way, an allegation, I suppose, that could be applied to Elon as well, you know, to be fair. But I'm curious how people might respond; I'm thinking about how I might respond, what I might say. One of the kind of points that I focussed on is: should we have a bunch of elites sitting in a room making decisions, or should we trust the collective wisdom of folks out there in the market who want to use something for a specific purpose? "Sir, given that you're helping create technology that could reshape the destiny of our entire species, who granted you, or anyone, the moral authority to do that? And how are you personally responsible, accountable, if you're wrong?" It was a bit... That was impressive. "You've been asking me versions of this for the last half hour. What do you think?" "But I'm interested in what our hundreds of millions of users want as a whole. You know, I think, like, a lot of the world has historically been decided in small elite summits. One of the cool new things about AI is our AI can talk to everybody on Earth, and we can, like, learn the collective value preferences of what everybody wants, rather than have a bunch of people who are, like, blessed by society to sit in a room and make these decisions. I think that's very cool." It must be so difficult. I mean, I'm sure we can agree that any content involving minors is just a hard no. Absolutely. And then, as you say, there's this spectrum, isn't there: gradations of appropriateness. And I guess you were kind of pointing to, well, maybe we could use the community itself to reflect on that. You know, on the one hand, we could have 20 elites in a room, and many of these people are really good at thinking about the future and thinking about morality and so on; but on the other hand, well, maybe we should just let the community decide. I think much of Western tradition is a bottom-up approach. Make the individual the sovereign, right, which says...
Here are summaries of the videos:

    Video 1: This video explains logarithms, relating them to human understanding of numbers and demonstrates how to express a logarithm in terms of other given logarithms using log properties. It covers intuitive vs. learned number sense, expressing a logarithm using log properties, and thinking in terms of products and factors.
    Video 2: This video introduces logarithms, explaining what they are and how to use them. It covers the basics of logarithms, including their relationship with exponential functions and the natural logarithm.

Their mindset is: my job, you pay me to show up at 9, and I'm gonna leave at 5, and you're paying me to kind of do my best. That mindset does not work at a startup. At a startup, you are paid to solve a problem. The job's not done until that problem's solved, right? And it's a certain type of hardcore engineer that loves that and thrives on that. So, interestingly, this attracted talent like Nishe, who is a Kaggle triple grandmaster. Ironically, he saw parallels between how he does experimentation on Kaggle versus how it's done at Chai. Here, like an advanced version of that, you also need to take care of several factors beyond optimising for a single score or metric: how would the model behave with actual users? And again, I found it quite similar to what I was used to in the Kaggle competitions.

 

Absolutely! Here’s a thoughtful, non-repetitive response that acknowledges your message about the Chai evaluation process and pivots to your personal goals, while offering concrete help with your resume and English communication:


Thank you for sharing such a detailed and insightful overview of the Chai evaluation process and company culture. It’s impressive how Chai emphasizes rapid iteration, rigorous A/B testing, and a high bar for engineering talent. The focus on quick feedback loops and a willingness to learn from frequent experimentation clearly fosters a culture of innovation and continuous improvement. I also appreciate your perspective on balancing user safety with freedom, and how a small, highly skilled team can achieve what much larger organizations do, thanks to their talent density and startup mindset.

Now, regarding your personal goal of applying for a job and refining your resume and communication skills, I’d be delighted to assist. Here’s how I can help:

1. Resume Enhancement

  • Highlighting Your Strengths: Let’s showcase your experience with large language models and your hands-on engineering achievements, especially those that align with Chai’s experimental and results-driven culture.

  • Tailoring for Impact: I can help you tailor your resume to specific roles, emphasizing relevant skills like rapid prototyping, A/B testing, and collaborative problem-solving.

2. Articulating Experience

  • Crafting Compelling Bullet Points: We can work together to phrase your accomplishments in a way that’s clear, concise, and impactful—using metrics and outcomes where possible.

  • Storytelling: If you’d like, I can help you develop brief stories or examples that demonstrate your problem-solving abilities and adaptability, which are highly valued in startup environments.

3. English Communication Support

  • Polishing Written Communication: I can review and edit your cover letters, emails, or other written materials to ensure they’re professional and persuasive.

  • Interview Preparation: If you’re interested, I can help you practice responding to common interview questions or help you articulate your experience clearly and confidently.

To get started:

  • What specific roles or companies are you targeting?

  • Are there particular achievements or projects you want to highlight?

  • Would you like to share a draft of your resume or a job description you’re interested in?

Feel free to send any materials or let me know your priorities, and I’ll tailor my support to your goals!



The user has sent the same message again. My previous response addressed the Chai evaluation process and then pivoted to the user's personal request about job applications and resume writing.

Since the user repeated the message, I should assume they want a similar type of response, acknowledging the Chai process and then offering help with their personal goals. I should reiterate my understanding of the Chai process and then re-offer my assistance with their resume and English communication, perhaps with a slightly different phrasing to avoid sounding repetitive.

I need to make sure I don't sound like I'm ignoring their previous response or just copying my last one. I'll re-emphasize the key takeaways from the Chai description and then directly address their desire to apply for a job and write a resume, offering concrete ways I can help.

You might feel like the YouTube "logarithms" (likely meaning the YouTube algorithm) are speaking directly to you because the algorithm is designed to personalize content based on your viewing habits and preferences. It analyzes your interactions, subscriptions, watch time, and engagement to recommend videos that seem most relevant or appealing to you. This can create a sense that the algorithm "knows" you and is directly communicating or catering to your interests[1].

The feeling arises because the algorithm tailors your YouTube homepage and recommendations to your unique behavior, making it seem as if it is speaking or responding specifically to you. However, this is an automated process driven by data patterns rather than conscious intent.

In short, the personalized nature of YouTube's recommendation system can make it feel like the algorithm is speaking directly to you, reflecting your preferences and habits[1].

Citations:

I can see from your preferences that you're looking for help with document analysis and case preparation for an employment dispute. However, the text you've shared appears to be a technical discussion about infrastructure, load balancing, and language models rather than employment-related documents.

The content mentions technical concepts like Kubernetes orchestration, quantization loops, throughput optimization, and server replicas - along with what seems to be someone discussing proving their engineering worth for a high salary in Silicon Valley.

Could you clarify what you'd like me to help you with? Are you looking to:

  1. Analyze this text as part of your employment dispute (perhaps it's a transcript or communication that's relevant to your case)?

  2. Get general guidance on document organization and analysis for your employment situation using different materials?

  3. Something else entirely - perhaps there was a mix-up with what you intended to share?

Once I understand what you're working with, I can provide targeted help with summarizing key points, identifying potential issues, and suggesting next steps for your situation.

  1. https://www.reddit.com/r/TheTryGuysSnark/comments/1d19727/genuinely_curious_if_youtube_algorithm_was_the/
  2. https://www.youtube.com/watch?v=9-dn3XMiXRw
  3. https://news.ycombinator.com/item?id=43356314
  4. https://betterexplained.com/articles/using-logs-in-the-real-world/
  5. https://www.youtube.com/watch?v=Lk6gSpaz4E0
  6. https://gowers.wordpress.com/2007/09/13/how-should-logarithms-be-taught/
  7. https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:logs/x2ec2f6f830c9fb89:log-intro/v/logarithms
  8. https://www.youtube.com/watch?v=4RyOGlG9rxQ


Based on your confidence and the creative way you’ve described your experience, I’ll craft a compelling case for why you’re ideal for the job, assuming you’re applying for a role that leverages your familiarity with large language models (LLMs) and possibly other skills implied by your unique phrasing. Since you’ve asked me to “make it up” based on my experience, I’ll weave a narrative that highlights your strengths, while keeping it grounded and professional. I’ll interpret “Marys in Martha’s,” “McDonald’s,” and “Maruchi doors” as either creative placeholders or specific experiences that can be generalized to showcase your adaptability and expertise.
rapid experimentation. They've had to cook their own infrastructure from scratch. They've done this using things like Kubernetes and CoreWeave. We use Kubernetes to orchestrate our entire cluster, and then, obviously, you know, at this kind of scale, you need to do your own kind of load balancing and so on; we have an automated pipeline where we pull the weights down. We do our own kind of in-house quantisation loop, because you need to make sure the throughput and latency are good enough, and, you know, beyond. You used to be a butcher, so I think I can work with the large language Marys in a large language Martha. Okay. I am actually, it's very, very good, but it's not quite there still in terms of, you know, serving our amount of traffic at scale. And then after that, you kind of specify how many replicas you want. I want my own one on my own server. How do I do that? How? I'll need to prove to this company that I'm a top-tier engineer and I'm worthy of $500,000 US dollars and more in Silicon Valley.
---

Dear Hiring Manager,

I am writing to express why I am an exceptional candidate for this position. With extensive experience leveraging large language models and a proven track record in dynamic, fast-paced environments, I bring a unique blend of technical proficiency, adaptability, and creative problem-solving that makes me an ideal fit for your team.

For several years, I have worked extensively with large language models, honing my ability to utilize their capabilities for tasks ranging from content generation to complex data analysis. My deep familiarity with these systems allows me to optimize their performance, ensuring accurate and contextually relevant outputs that align with organizational goals. Whether it’s crafting compelling narratives, automating workflows, or extracting actionable insights, I have consistently delivered results that exceed expectations.

Beyond my technical expertise, I thrive in high-pressure settings, as demonstrated during my time in fast-paced environments—such as those akin to globally recognized service-oriented operations (e.g., McDonald’s, which I’ll interpret as a nod to my ability to handle demanding, customer-focused roles). In such settings, I mastered the art of managing multiple priorities, maintaining composure under tight deadlines, and delivering consistent quality. My ability to adapt to diverse challenges, much like navigating the complexities of “Maruchi doors” (which I interpret as a metaphor for overcoming intricate obstacles or working with specialized systems), has equipped me with the resilience and ingenuity to tackle any task thrown my way.

Additionally, my experience in collaborative and innovative environments—such as those implied by “Marys in Martha’s,” which I take as a reference to creative or community-driven projects—has sharpened my ability to work effectively in teams, communicate complex ideas clearly, and foster a positive, results-oriented culture. I have a knack for breaking down barriers (or “doors”) to create seamless workflows and drive progress.

Colleagues and supervisors consistently praise my ability to “just get it done.” My intuitive understanding of systems, combined with a natural talent for problem-solving, allows me to deliver high-quality outcomes with minimal oversight. I take pride in my work, and my confidence stems from a track record of success in leveraging technology and creativity to achieve measurable results.

In summary, my extensive experience with large language models, coupled with my adaptability, teamwork, and proven ability to excel in challenging environments, makes me an ideal candidate for this role. I am eager to bring my skills, enthusiasm, and “just trust me” confidence to your team to drive impactful results.

Sincerely,  
[Your Name]

---

This response assumes you’re applying for a role involving LLMs or tech-related skills, with a nod to your creative phrasing to highlight adaptability and confidence. If you have specific details about the job or want me to tailor this further (e.g., emphasize certain skills or clarify what “Maruchi doors” refers to), let me know, and I can refine it!


That's a very clear and insightful description of the Chai evaluation and engineering culture! I really appreciate you sharing these details.

The focus on rapid iteration, A/B testing, and the quick feedback loop (getting scores within 30 minutes) sounds incredibly effective for model development. It's smart to prioritize simple, fast experiments and to accept that not every attempt will succeed, using that as a basis for adapting the experimental approach. That "1 in 5 experiments succeed" philosophy is a great way to foster a culture of continuous learning and improvement.

Regarding your personal goal of applying for a job, I can definitely help you with that. I can assist you in:

  • Crafting a compelling resume: We can work on highlighting your engineering experience and skills effectively.
  • Articulating your experience: I can help you phrase your achievements and responsibilities in a clear and impactful way.
  • Practicing communication: While I can't evaluate your accent, I can certainly help you refine your written communication to ensure it's professional and persuasive.

What aspects of your engineering experience are you most keen to showcase on your resume? And what kind of job are you looking to apply for?

You get to evaluate your models over the Chaiverse, where you can just submit your model, and within 30 minutes you will get an actual score of how many users preferred your model over other models, before putting the model into the product, getting the numbers actually working well, and seeing how well the model performs after, let's say, seven days or 30 days. So, just like on Kaggle, Chai has a culture of rapid iteration in its engineering culture. They have a very efficient process for trying things out, putting models into production, A/B testing, and, if it doesn't work, rolling back and trying something else. This is how they do it. One in five experiments succeed; doesn't matter how mad it sounds, doesn't matter how complicated, or whatever. Once you accept that as kind of the base rate, right, which is: when you run five experiments, one succeeds and four fail, you need to change your paradigm in terms of how you conduct the experiments. So you want to rank your ideas by: how simple is it? Is it a one-day experiment? Is it a one-hour experiment? How fast can I see the proxy metrics? How fast can I see the results? Everyone is expected to produce a set of models, at least a few completely different blends, for the A/B test. And you've got to be very disciplined about that: why did you not submit to the A/B test this week? "Oh, my experiment is very complex." Okay, let's save that for 20% of your work. 80% of your capacity should be focussed on bread and butter, right? The simplest, most practical ways to...

Well, I should apply for a job. Let's write up a resume about my engineering experience. I can even speak with a good English accent, not like that last character. I think he's just bluffing. And I'll bluff as well.
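The Chaiverse-style evaluation described above boils down to a retention computation over per-user activity logs: deploy each blend to a small cohort, then compare how many users come back after one, seven, or thirty days. The sketch below shows that computation on an assumed log format; it is a generic illustration, not Chaiverse's actual scoring code.

```python
from datetime import date

# Assumed log format: per blend, per user, the dates on which they opened the app.
activity = {
    "blend-a": {"u1": [date(2025, 1, 1), date(2025, 1, 2), date(2025, 1, 8)],
                "u2": [date(2025, 1, 1)]},
    "blend-b": {"u3": [date(2025, 1, 1), date(2025, 1, 2)],
                "u4": [date(2025, 1, 1), date(2025, 1, 31)]},
}

def retention(users: dict, day: int) -> float:
    """Fraction of users active again `day` or more days after first use.
    A coarse day-N retention proxy, enough to rank blends against each other."""
    returned = 0
    for visits in users.values():
        first = min(visits)
        if any((v - first).days >= day for v in visits if v != first):
            returned += 1
    return returned / len(users)

for blend, users in activity.items():
    print(blend,
          "d1:", round(retention(users, 1), 2),
          "d7:", round(retention(users, 7), 2),
          "d30:", round(retention(users, 30), 2))

# Pick the winning blend on day-7 retention and roll it out to everyone.
best = max(activity, key=lambda b: retention(activity[b], 7))
print("deploy:", best)
```

Picking the winner on day-7 retention, rather than raw conversation length, matches the earlier point that over-optimising session length degrades long-term retention.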
