Unlimited power!!! (fediscience.org)
[-] bufordt@sh.itjust.works 2 points 1 year ago

It's similar in IT. Almost no one recommends regular password changes anymore, but we won't pass our audit if we don't require password changes every 90 days.

[-] azertyfun@sh.itjust.works 2 points 1 year ago

Same vibe as management buying Oracle products because it's "trustworthy".

[-] bufordt@sh.itjust.works 2 points 1 year ago

When we first switched to JD Edwards, it still sent the passwords in plain text, and our Oracle partner set up our WebLogic instances over HTTP instead of HTTPS.

I had to prove I could steal passwords as just a local admin on a workstation before they made encrypting the traffic a priority.

[-] InfiniWheel@lemmy.one 1 point 1 year ago

A very non-techy relative works at a company that requires password changes every month. At this point his passwords are extremely easy to guess, basically 123aBc+ and variations of it.

Yeah, no clue how that caught traction.

[-] DarkDarkHouse@lemmy.sdf.org 1 point 1 year ago* (last edited 1 year ago)

Our IT department won't allow password managers. Their current stance on what we should do instead is "Uh, we're working on it". So everyone at work uses weak passwords and writes them down in notepad. headdesk

[-] WagnasT@iusearchlinux.fyi 1 point 1 year ago

The only way this gets fixed is when the audits say to follow NIST recommendations.

[-] AlexRogansBeta@kbin.social 1 point 1 year ago

I feel this in my bones as an anthropologist when it comes to semi-structured interviews, which frankly have very little to do with anthropological inquiry but have nonetheless become a rote methodology.

[-] verdeviento@mander.xyz 1 point 1 year ago

👀 lookin' at you, alpha=.05

[-] smashboy@kbin.social 1 point 1 year ago

Oof, ouch, right in the psychology degree :(

[-] Hellsadvocate@kbin.social 1 point 1 year ago

It makes me wonder if we could create AIs that behave close enough to humans by adding neurological baseline noise to the LLM training, then throwing them into simulations to see whether social-science methods hold up. I'd be curious to see how true to life something like that would be as well.

A while ago, some researchers designed a game where ChatGPT agents were assigned to characters and told to act and live like humans. It was interesting to watch. https://www.iflscience.com/stanford-scientists-put-chatgpt-into-video-game-characters-and-its-incredible-68434

[-] kaffeeringe@feddit.de 1 point 1 year ago

The history of IQ-tests...

[-] thanevim@kbin.social 1 point 1 year ago

Don't know if this is the intended reference, but it pretty much perfectly describes why we still use the polygraph. As covered (and explained better than I can) on Adam Ruins Everything: https://youtu.be/nyDMoGjKvNk

this post was submitted on 03 Jul 2023
23 points (96.0% liked)

Science Memes