Hi, I'm Anthony and I'm a computer scientist
@email@example.com maybe he just loves Australian soft rock???
he also likes the Little River Band and the Bee Gees.
@firstname.lastname@example.org you might be interested to know that my 18 month old is a big fan of an 80s band from your homeland, Air Supply
@email@example.com oh wow, I’d never heard of that L*. I suppose such a short name is bound to be reused.
I was thinking about Dana Angluin’s algorithm, from 1987. Ancient computer science. The kind that youngsters ought to be taught, but rarely are.
variant* It’s a funny typo because Rivest and Schapire formulate some of their results in Leslie Valiant’s PAC framework
@firstname.lastname@example.org hmm yes I understand what you’re saying
@email@example.com Oh no! 🤞 that you don’t come down with anything too nasty.
I spent a fair amount of my spare time this week diving into some ancient computer science from the 1970s, 1980s and 1990s (!!!), specifically Dana Angluin’s L* algorithm for learning a finite state machine from an oracle and Rivest & Schapire’s follow-ups and extensions. Quite beautiful work in my opinion.
L* is especially simple and elegant imo. Schapire’s valiant is more computationally efficient and I think grounded a bit better, but a little harder to understand.
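For the curious, here’s a minimal sketch of the L* idea in Python. The observation-table bookkeeping follows Angluin’s algorithm; I’ve used the common variant that adds counterexample suffixes to the column set E (which keeps the table consistent by construction), and I approximate the equivalence oracle with a brute-force search over strings up to a bounded length, a standard stand-in when no true oracle is available. All names here (`l_star`, `member`, `max_len`) are my own, not from the papers.

```python
from itertools import product

def l_star(alphabet, member, max_len=8):
    """Sketch of Angluin's L* (1987): learn a DFA from a membership oracle.

    `member(w)` answers membership queries; equivalence queries are
    approximated by checking all strings up to length `max_len`.
    Returns (states, start, accepting, accepts) for the learned DFA.
    """
    S, E = [""], [""]            # access strings (rows) / distinguishing suffixes (columns)

    def row(s):                  # a row of the observation table
        return tuple(member(s + e) for e in E)

    while True:
        # --- make the table closed: every one-letter extension of a row in S
        # must equal the row of some element of S ---
        closed = False
        while not closed:
            closed = True
            rows = {row(s) for s in S}
            for s in list(S):
                for a in alphabet:
                    r = row(s + a)
                    if r not in rows:
                        S.append(s + a)   # new state discovered
                        rows.add(r)
                        closed = False

        # --- build the hypothesis DFA: states are the distinct rows ---
        states = sorted({row(s) for s in S})
        start = row("")
        accepting = {q for q in states if q[0]}          # E[0] is always ""
        delta = {(row(s), a): row(s + a) for s in S for a in alphabet}

        def accepts(w):
            q = start
            for a in w:
                q = delta[(q, a)]
            return q in accepting

        # --- approximate equivalence query: hunt for a counterexample ---
        cex = None
        for n in range(max_len + 1):
            for w in map("".join, product(alphabet, repeat=n)):
                if member(w) != accepts(w):
                    cex = w
                    break
            if cex is not None:
                break
        if cex is None:
            return states, start, accepting, accepts
        # add every suffix of the counterexample to E, then refine again
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])
```

As a toy example, learning “even number of a’s over {a, b}” needs only two states and a handful of membership queries.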
Regarding “echo chambers”: this is a popular, popularized term. It makes some kind of sense to a person who thinks a certain way, but introspection or intuitiveness does not a fact make.
You can slant your intuition the other way if you like. The claim is that in an information environment with lots of specialized sources, people will seek out information sources that support, or at least don’t contradict, what they already believe. I.e., they will enter an echo chamber. But it is just as reasonable to believe that in an information environment with that much diversity, people will be exposed to a wide variety of ideas in spite of themselves, and people who actively seek out nuance won’t have any trouble finding it. Some people might get sucked into an echo chamber, but most won’t.
That’s just as intuitive a stance to hold.
It’s also the stance that seems to fit the data
Using a nationally representative survey of adult internet users in the United Kingdom (N = 2000), we find that those who are interested in politics and those with diverse media diets tend to avoid echo chambers. This work challenges the impact of echo chambers and tempers fears of partisan segregation since only a small segment of the population are likely to find themselves in an echo chamber.
Here’s a more expository account that surveys numerous data points; as the authors put it
A deep dive into the academic literature tells us that the “echo chambers” narrative captures, at most, the experience of a minority of the public. Indeed, this claim itself has ironically been amplified and distorted in a kind of echo chamber effect.
@firstname.lastname@example.org “being in an echo chamber that you aren’t aware of” isn’t a thing. Not for rational people. That’s not a real phenomenon. It’s as real as bogeymen, ghosts, demons….
@email@example.com more like they draw them into a space where their worst inclinations are reinforced. Pushback only reinforces beliefs in some people.
The notion that reasonably well-adjusted people who mostly read stuff by other reasonably well-adjusted people are somehow at risk of some ill-defined “echo chamber” effect is bunk. Folks tend to seek out information and adjust their own notions accordingly, unless they’ve been “info poisoned” for lack of a better term.
Indeed! It comes to mind the popular saying, “How do you deal with nazis? — You punch them in the face.”
One of my favorite animated GIFs depicts exactly that 😆
A little less violently, deplatforming works. That’s been demonstrated time and again. It’s one of the many reasons to be alarmed by what Elon Musk is doing at Twitter, un-banning hateful accounts that had been banned previously. He is re-platforming people who don’t merit a platform, and he himself is amplifying them.
@firstname.lastname@example.org @email@example.com I’m blanking on where I first read it–might be Jared Yates Sexton, or maybe Sarah Kendzior–but I’m of a mind that sentiments like “debate is the best way to resolve disputes” are kind of nostalgic and naive because they ignore the conditions we’re currently living in. Sure, if we lived in a healthy society with a healthy information space, widespread respect for differing points of view, a relative lack of suffering, etc etc etc, then yes, maybe that would be true. There were points in our history (I’m speaking of the US because that’s where I’m from) when we approximated those ideals, at least for some people, and many of us aspired to perfect them. But today, in 2022, we do not live in those conditions. There are many people who actively want to destroy any progress towards these ideals we’ve managed to make, and who actively, publicly advocate for going backwards from there. Debate is no longer the best way to resolve disputes, in these conditions, not with people who are trying to force the world backwards.
It is foolish to think otherwise. It is just as foolish as believing water puts out all fires and throwing water onto an oil fire. You have to recognize the reality you’re living in, then choose the right tool for the job. If you’re living in a time where political violence is normalized/is being normalized and demonization is rampant, and you’re facing a bad faith argument from a bad actor who is preaching something like antisemitism, you don’t reach for “debate” as your tool of choice. You reach for “deplatforming” (for example), because that demonstrably works. You take them, and their damaging ideas, off the public square completely and keep them out of it.
the point of debating on social networks is not to stop people from spreading bad ideas. It’s to make everybody else who watches the debate think, so they don’t fall for those bad ideas. By hiding the bad ideas and not debating them, we may push other people to believe in them, and we may push people who already believe in them to stay in an echo chamber
No. This is a naive point of view, and it does not jibe with current research. Really. I urge you to read up on disinformation research especially after Facebook was called out for the Cambridge Analytica scandal. Other people do not look at a debate, see the bad information exposed as bad by good arguments, and change their minds. It doesn’t work that way. Misinformation purposely targets people’s emotions, and when the emotional appeal works, they tend to view the people debating against the view as enemies. They reject the good ideas even more forcefully.
Sure, there are hypothetical people who will see a debate, recognize that bad information has been exposed, and react by rejecting that bad information. Probably most of the people here fall into that group. But people like that were never the problem. The problem is the vast number of people who will react by believing the bad information even more stubbornly. Read the research–this is a real, documented effect I am describing.
Also, the dangers of the “echo chamber” that you evoked are very much overblown, almost surely by purveyors of disinformation because that fear helps them do their work (I’ll note you raised this as a danger–an emotional appeal–instead of citing data). The echo chamber effect, to the extent it exists, is bad for people who are already suffering from information poisoning. People who’ve already bought into some piece of misinformation fall into or stay in an echo chamber. Once again, misinformation purveyors have very detailed strategies–Google, you can find them–for how to draw unsuspecting people into an echo chamber and keep them there.
@firstname.lastname@example.org @email@example.com @firstname.lastname@example.org I think we cannot ignore the fact that there are nations with “cyberwarfare” divisions. Hundreds, possibly thousands, of people who sit in rooms all day every day–it’s their job–doing nothing but creating and spreading what we call “misinformation” or “disinformation”. That is a very different phenomenon from ignorant people spreading beliefs that happen to be dangerous. It is an explicit attempt to cause harm. Social media sites have been horrible conduits of this, but misinformation circulates many ways, including through trusted news media.
One aspect of cyberwarfare that information warriors take advantage of is that well-meaning people spread the bad information by reacting to it. Misinformation tends to target the emotions, and receptive people (which is all of us, basically) react to it on an emotional level. However, well-meaning people tend to react to the logical content of the information. They debate the facts being presented, or they attack the logical structure. But this functions to reinforce the bad information in people who react emotionally. In other words, the process of debating misinformation functions to reinforce it. Bad actors know this full well. I’ve read training materials for spreading misinformation–they know exactly what they’re doing.
I don’t know what the answer is, but we can’t be naive and think that just by “debating” we are going to stop people from spreading bad ideas. That’s like throwing water on an oil fire–it makes it worse, not better. We need to be better equipped than this.
@email@example.com no, I disagree with you. It is quite different.
Yes, the device might have an impact on the child. Of course, that’s obvious.
But we’re talking about creating a dossier that is on the internet, available to anyone who looks, and that modifies how the child is perceived by countless people before they are able to give consent for that kind of crafting of their image.
You may not care about either of these in the ways that I do, but you have to admit they have very different impacts on the kid.
@firstname.lastname@example.org the thing that bugs me is that social media allows kids to build what might be a permanent, unerasable image of themselves online before they’re fully aware of what the lifelong consequences of that might be.
I feel like every day more and more stories like #abxlsba come to light about him. He’s just an awful person.
when I say “destroying things”, I mean stuff like this: https://gizmodo.com/silicon-valleys-transportation-failures-tesla-waymo-bir-1849382788
So the Hyperloop, for example, he admitted to his biographer that the reason the Hyperloop was announced—even though he had no intention of pursuing it—was to try to disrupt the California high-speed rail project and to get in the way of that actually succeeding.
In other words, Musk explicitly, consciously killed a high-speed rail project, and probably made off with some state of California funding in the process. When we wonder why we have lousy rail service in the United States compared to Europe for instance, it’s partly explained by people like Elon Musk.
Con artist through and through. It’d be pathetic if it weren’t destroying things.
@email@example.com he grew up in apartheid South Africa to parents who made their money from emerald mining. He’s long been part of “the PayPal Mafia” that includes outspoken bad actors like Peter Thiel and David Sacks. He’s always been this way. He’s a narcissist and has carefully crafted a cult of personality and legion of fanboys and followers who launder his reputation on his behalf.
I think, as a matter of self protection, we collectively need to stop idolizing rich tech people. They are, almost to a one, bad actors and not worthy of our time let alone our adulation. Given the opportunity they will do bad stuff. Just think of all the people over decades, like Bill Gates, Jeff Bezos, and Elon Musk who initially were propped up as some kind of unsung tech genius, only to finally reveal themselves as nothing more than greedy money hoarders who won’t hesitate to harm people. This is a feature, not a bug, and we need to be better at identifying it sooner.
@firstname.lastname@example.org I didn’t think too much of him previously; didn’t dislike him any more than I dislike billionaires generally. He just seemed like a run of the mill rich guy con artist who happened to appeal to a certain type of tech guy. But, the other day he tweeted something about his first born child dying in his arms and how he felt the last heartbeat. But then his ex-wife tweeted that no, in fact the child had suffered from SIDS and died in her arms. And that soured me on him completely forever. He’s the sort of self-important narcissist who would make up something about a dying baby to try to score points in an online argument. That’s so pathetic and contemptible.
Incidentally, this is the line of thinking you seem to be siding with contra my own:
Not a good look imo
Before any free speech absolutists dive into this with the free speech stuff, please be aware that we are being inundated with misinformation spread, especially through social media, by bad actors who are doing this purposely, with great effort and effect. This issue is not about individuals being able to freely discuss ideas. This is about protecting ourselves from bad actors who have dedicated enormous resources to poisoning the information space so that we are unable to debate ideas on the merits anymore. Once you come to believe that misinformation is not about people holding bad ideas inadvertently but is about bad actors attempting to harm people, you take a different stance. That is the stance I hold.
@stigatle It’s been demonstrated that content warnings like this save lives. Are you really trying to say that you think people “being able to talk” is more important than saving lives?
You’d let them shout “fire” in a crowded movie theater? Because this has been litigated already in the US, and it’s illegal.
Today’s “twitter is becoming a dangerous shell of its former self” news.
The 4channing of Twitter continues…
@email@example.com so, warum/darum (“why”/“that’s why”) is what you’re saying
@firstname.lastname@example.org summer and the morning? it’s winter and dark night here!
Twitter is 4channing fast. Letting the worst of the previously-banned far right back on, suspending or permanently banning activists and journalists.
@email@example.com very nice!
@firstname.lastname@example.org 20.jpg and the train pics are 👌!
@email@example.com I think I like that choice? Not sure yet since I haven’t tried using it for anything real. I do like the fact that it doesn’t introduce a bunch of weird files to your repository that you then have to worry about managing.
@firstname.lastname@example.org it’s hilarious that it’s called “semantic” web when nobody knows what it really means (“semantic” being the word for “meaning”)
remote: Updating references: 100% (1/1)
To $REPO
 * [new reference] refs/identities/af97ed38e619cf0dc6b52363cf5b8032755b16a5 -> refs/identities/af97ed38e619cf0dc6b52363cf5b8032755b16a5
 * [new reference] refs/bugs/00fd29b9f50294a64ad72c039a7340b5863d7907 -> refs/bugs/00fd29b9f50294a64ad72c039a7340b5863d7907
So it puts stuff in $DIR/.git/refs. It creates a cache directory too.
I have to say, it’s surprisingly full-featured given that it’s pre-1.0 and the main author warns that there be dragons here (though not so surprising given that there are over 2,000 commits!). You can do the entire create/label/comment on/push/pull/close bug workflow entirely on the CLI with git subcommands, which is how I’d probably use it were I to adopt this. The webui looks remarkably like github/gitea/etc if you’re into that.
Distributed, offline-first bug tracker embedded in git, with bridges
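To make the workflow above concrete, here’s a hypothetical transcript of what that CLI-only cycle might look like. The subcommand names are from my memory of the pre-1.0 git-bug releases and may well have changed between versions, so treat this as a sketch and check `git bug --help` for the real thing:

```shell
git bug user create                       # set up an identity (stored under refs/identities)
git bug add -t "Crash on startup" -m "Stack trace attached"
git bug ls                                # list bugs; note the short id (e.g. 00fd29b)
git bug comment add 00fd29b -m "Reproduced on latest build"
git bug label add 00fd29b confirmed
git bug push                              # sync refs/bugs/* to the remote
git bug pull                              # merge in everyone else's edits
git bug status close 00fd29b
```

Since everything lives in git refs, push and pull here are ordinary ref updates, which is exactly what the output above shows.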
@email@example.com wow this is pretty horrifying. I didn’t realize it was becoming more common
@firstname.lastname@example.org hahaha 😆
@email@example.com hmm, that’s interesting. Many cell phone plans here include unlimited SMS free and a fair number of people use them still, at least among Old People like me.
@firstname.lastname@example.org yes, you used to be able to tweet through SMS. I actually did that a few times, long ago! It can be handy at times
An only 50% joking idea: an SMS gateway to post twts/yarns