In which I have strong opinions


I delete most of my posts after a month or so to keep my blog manageable and to organize my reblogs. You have my permission to reblog whatever deleted post I made. It wouldn’t be on the internet if I wasn’t okay with it getting shared.

Here are some of the major resources I’ve made and some of my sideblogs in case you’re looking for something that I reblogged, plus my answers to the tech questions I get most frequently:

[resource links behind the read-more cut]

Pinned post
ms-demeanor

Okay, so here is the thing: Tiny Bastard looks tiny, right?

[image]

You look at her and you go "wow, yes, that is a small dog alright."

[image]

She's a chihuahua mix; that must mean she's little. She's a baby, so little, so tiny!

[image]

I often ask her myself: how did you get so small? Was it hard? Did it take a lot of practice?

[image]

(She does not like to be questioned).

And that's when you realize that most of the photos you've seen of her are of Large Bastard holding her.

[image]

She's as big as my torso.

[image]

She weighs almost twenty pounds.

[image]

All of her sweaters are medium at the smallest; in dog clothes sizes, she's a medium/large.

[image]

You have succumbed to the optical illusion of assuming the person holding her wasn't nearly seven feet tall.

[image]
ms-demeanor

Someone pointed out that I should add this photo. Extremely medium dog. Tiny Bastard at heart.

[image]
ms-demeanor

bastardly reminders.

ms-demeanor

I was out walking tiny bastard this morning when we saw a little girl and her grandma waiting out front of their house. Now, little kids are tiny bastard's favorite thing in the world, so she wanted to say hi, and the little girl was excited, so I told grandma that tiny bastard was friendly and the little girl could pet her if she wanted to. She got nervous because tiny bastard was bouncing around, so I crouched down to keep tiny bastard still, and the little girl reached out and touched the end of my braid and said "look gramma she has blue hair," so I swept off my beanie and said "and pink!" because my hair is still bicolored, and you guys, I have been riding high all day on that little girl shouting "AND PINK!" and descending into incoherent bouncing clapping glee.

ms-demeanor

For those asking to see tiny bastard, here she is, along with what my hair looks like.

[images]



ultraviolet-divergence
probablyasocialecologist

There is no obvious path between today’s machine learning models — which mimic human creativity by predicting the next word, sound, or pixel — and an AI that can form a hostile intent or circumvent our every effort to contain it.

Regardless, it is fair to ask why Dr. Frankenstein is holding the pitchfork. Why is it that the people building, deploying, and profiting from AI are the ones leading the call to focus public attention on its existential risk? Well, I can see at least two possible reasons.

The first is that it requires far less sacrifice on their part to call attention to a hypothetical threat than to address the more immediate harms and costs that AI is already imposing on society. Today's AI is plagued by error and replete with bias. It makes up facts and reproduces discriminatory heuristics. It empowers both government and consumer surveillance. AI is displacing labor and exacerbating income and wealth inequality. It poses an enormous and escalating threat to the environment, consuming a vast and growing amount of energy and fueling a race to extract materials from a beleaguered Earth.

These societal costs aren’t easily absorbed. Mitigating them requires a significant commitment of personnel and other resources, which doesn’t make shareholders happy — and which is why the market recently rewarded tech companies for laying off many members of their privacy, security, or ethics teams.

How much easier would life be for AI companies if the public instead fixated on speculative theories about far-off threats that may or may not actually bear out? What would action to “mitigate the risk of extinction” even look like? I submit that it would consist of vague whitepapers, a series of workshops led by speculative philosophers, and donations to computer science labs that are willing to speak the language of longtermism. This would be a pittance compared with the effort required to reverse what AI is already doing to displace labor, exacerbate inequality, and accelerate environmental degradation.

A second reason the AI community might be motivated to cast the technology as posing an existential risk could be, ironically, to reinforce the idea that AI has enormous potential. Convincing the public that AI is so powerful that it could end human existence would be a pretty effective way for AI scientists to make the case that what they are working on is important. Doomsaying is great marketing. The long-term fear may be that AI will threaten humanity, but the near-term fear, for anyone who doesn’t incorporate AI into their business, agency, or classroom, is that they will be left behind. The same goes for national policy: If AI poses existential risks, U.S. policymakers might say, we better not let China beat us to it for lack of investment or overregulation. (It is telling that Sam Altman — the CEO of OpenAI and a signatory of the Center for AI Safety statement — warned the E.U. that his company will pull out of Europe if regulations become too burdensome.)

Source: undark.org