
Low-quality image placeholders (LQIPs) have a solid place in web page design. There are many different solutions, but the main gotcha is that generating them tends to lean on JavaScript, require lengthy chunks of not-particularly-human-readable code, or involve other tradeoffs. [Lean] came up with an elegant, minimal solution in pure CSS to create LQIPs.

Here’s how it works: all required data is packed into a single CSS integer, which is decoded directly in CSS (no JavaScript needed) to dynamically generate an image that renders immediately. Another benefit is that, with no need for wrappers or long strings of data, this method avoids cluttering the HTML. The code amounts to little more than a single attribute on the image element, which is certainly tidy, as well as a welcome boon to those who hand-edit files.

The trick with generating LQIPs from scratch is getting an output that isn’t hard on the eyes or otherwise jarring in its composition. [Lean] experimented until settling on an encoding method that reliably delivered smooth color gradients and balance.

This method therefore turns a single integer into a perfectly serviceable LQIP, using only CSS. There’s even a separate tool [Lean] created to compress any given image into the integer format used, so the result will look like a blurred version of the original image. It’s true that the results look very blurred, but the code is clean, minimal, and easily implemented. You can see it in action in [Lean]’s interactive LQIP gallery.

CSS has a lot baked into it, and it’s capable of much more than just styling and lining up elements. How about trigonometric functions in CSS? Or, from the other direction, check out implementing a CSS (and HTML) renderer on an ESP32.


From Blog – Hackaday via this RSS feed


Sometimes it seems odd that we would spend hundreds (or thousands) on PC components that demand oodles of airflow, and stick them in a little box, out of sight. The fine folks at Corsair apparently agree, because they’ve released files for an open-frame pegboard PC case on Printables.

According to the writeup on their blog, these prints have held up just fine with ordinary PLA; apparently there’s enough airflow around the parts that heat sagging isn’t the issue we would have suspected. ATX and ITX motherboards are both supported, along with a few power supply form factors. If your printer is smaller, the ATX mount comes split into sections for your convenience. Their GPU brackets can accommodate beefy dual- and triple-slot models. It’s all there, if you want to unbox and show off your PC build like the work of engineering art it truly is.

Of course, these files weren’t released out of the kindness of Corsair’s corporate heart; they’re meant to be used with fancy pegboard desks the company also sells. Still, to their credit, they did release the files under a CC 4.0 Attribution-ShareAlike license. That means there’s nothing stopping an enterprising hacker from remixing this design for the ubiquitous SKÅDIS or any other pegboard should they so desire.

We’ve covered artful open cases here on Hackaday before, but if you prefer to hide the expensive bits from dust and cats, this midcentury box might be more your style. If you’d rather no one know you own a computer at all, you can always do the exact opposite of this build and hide everything inside the desk.


From Blog – Hackaday via this RSS feed


Telescopes are great tools for observing the heavens, or even surrounding landscapes if you have the right vantage point. You don’t have to be a professional to build one, though; you can make all kinds of telescopes as an amateur, as this guide from the Springfield Telescope Makers demonstrates.

The guide is remarkably deep and rich; no surprise given that the Springfield Telescope Makers club dates back to the early 20th century. It starts out with the basics—how to select a telescope, and how to decide whether to make or buy your desired instrument. It also explains in good detail why you might want to start with a simple Newtonian reflector on a Dobsonian mount if you’re crafting your first telescope, in no small part because mirrors are so much easier for the amateur to craft than lenses. From there, the guide gets into the nitty gritty of mirror production, right down to grinding and polishing techniques, as well as how to test your optical components and assemble your final telescope.

It’s hard to imagine a better place to start than here as an amateur telescope builder. It’s a rich mine of experience and practical advice that should give you the best possible chance of success. You might also like to peruse some of the other telescope projects we’ve covered previously. And, if you succeed, you can always tell us of your tales on the tipsline!


From Blog – Hackaday via this RSS feed


Historically, efforts to create original games and tools, port over open source emulators, and explore a game console’s hardware and software have been generally lumped together under the banner of “homebrew.” While not the intended outcome, it’s often the case that exploring a console in this manner unlocks methods to run pirated games. For example, if a bug is found in the system’s firmware that enables a clever developer to run “Hello World”, you can bet that the next thing somebody tries to write is a loader that exploits that same bug to play a ripped commercial game.

But for those who are passionate about being able to develop software for their favorite game consoles, and the developers who create the libraries and toolchains that make that possible, the line between homebrew and piracy is a critical boundary. The general belief has always been that keeping piracy at arm’s length made it less likely that the homebrew community would draw the ire of the console manufacturers.

As such, homebrew libraries and tools are held to a particularly high standard. Homebrew can only thrive if developed transparently, and every effort must be made to avoid tainting the code with proprietary information. Any deviation could be the justification a company like Nintendo or Sony needs to swoop in.

Unfortunately, there are fears that covenant has been broken in light of multiple allegations of impropriety against the developers of libogc, the C library used by nearly all homebrew software for the Wii and GameCube. From potential license violations to uncomfortable questions about the origins of the project, there’s mounting evidence that calls the viability of the library into question. Some of these allegations, if true, would effectively mean the distribution and use of the vast majority of community-developed software for both consoles is now illegal.

Homebrew Channel Blows the Whistle

For those unfamiliar, the Wii Homebrew Channel (HBC) is a front-end used to load homebrew games and programs on the Nintendo Wii, and is one of the very first things anyone who’s modded their console will install. It’s not an exaggeration to say that essentially anyone who’s run homebrew software on their Wii has done it through HBC.

But as of a few days ago, the GitHub repository for the project was archived, and lead developer Hector Martin added a long explanation to the top of its README that serves as an overview of the allegations being made against the team behind libogc.

Somewhat surprisingly, Martin starts by admitting that he’s believed libogc contained ill-gotten code since at least 2008. He accuses the developers of decompiling commercial games to get access to the C code, as well as copying from leaked documentation from the official Nintendo software development kit (SDK).

For many, that would have been enough to stop using the library altogether. In his defense, Martin claims that he and the other developers of the HBC didn’t realize the full extent to which libogc copied code from other sources. Had they realized, Martin says they would have launched an effort to create a new low-level library for the Wii.

But as the popularity of the Homebrew Channel increased, Martin and his team felt they had no choice but to reluctantly accept the murky situation with libogc for the good of the Wii homebrew scene, and left the issue alone. That is, until new information came to light.

Inspiration Versus Copying

The story then fast-forwards to the present day, and new claims from others in the community that large chunks of libogc were actually copied from the Real-Time Executive for Multiprocessor Systems (RTEMS) project — a real-time operating system that was originally designed for military applications but that these days finds itself used in a wide range of embedded systems. Martin links to a GitHub repository maintained by a user known as derek57 that supposedly reversed the obfuscation done by the libogc developers to try and hide the fact they had merged in code from RTEMS.

Now, it should be pointed out that RTEMS is actually an open source project. As you might expect from a codebase that dates back to 1993, these days it includes several licenses that were inherited from bits of code added over the years. But the primary and preferred license is BSD 2-Clause, which Hackaday readers may know is a permissive license that gives other projects the right to copy and reuse the code more or less however they choose. All it asks in return is attribution, that is, for the redistributed code to retain the copyright notice which credits the original authors.

In other words, if the libogc developers did indeed copy code from RTEMS, all they had to do was properly credit the original authors. Instead, it’s alleged that they superficially refactored the code to make it appear different, presumably so they would not have to acknowledge where they sourced it from. Martin points to one particular function as an example of RTEMS code being rewritten for libogc.

While this isolated function doesn’t necessarily represent the entirety of the story, it does seem hard to believe that the libogc implementation could be so similar to the RTEMS version by mere happenstance. Even if the code was not literally copy and pasted from RTEMS, it’s undeniable that it was used as direct inspiration.

libogc Developers Respond

At the time of this writing, there doesn’t appear to be an official response to the allegations raised by Martin and others in the community. But individual developers involved with libogc have attempted to explain their side of the story through social media, comments on GitHub issues, and personal blog posts.

The most detailed comes from Alberto Mardegan, a relatively new contributor to libogc. While the code in question was added before his time with the project, he directly addresses the claim that functions were lifted from RTEMS in a blog post from April 28th. He defends the libogc developers against the accusations of outright code theft, but his conclusions are not exactly a ringing endorsement of how the situation was handled.

In short, Mardegan admits that some of the code is so similar that it must have been at least inspired by reading the relevant functions from RTEMS, but that he believes this falls short of outright copyright infringement. As to why the libogc developers didn’t simply credit the RTEMS developers anyway, he theorizes that they may have wanted to avoid any association with a project originally developed for military use.

As for claims that libogc was based on stolen Nintendo code, the libogc developers seem to consider it irrelevant at this point. When presented with evidence that the library was built on proprietary code, Dave [WinterMute] Murphy, who maintains the devkitPro project that libogc is a component of, responded that “The official stance of the project is that we have no interest in litigating something that occurred 21 years ago”.

In posts to Mastodon, Murphy acknowledges that some of the code may have been produced by reverse engineering parts of the official Nintendo SDK, but then goes on to say that “There was no reading of source code or tools to turn assembly into C”.

From his comments, it’s clear that Murphy believes that the benefit of having libogc available to the community outweighs concerns over its origins. Further, he feels that enough time has passed since its introduction that the issue is now moot. In comparison, when other developers in the homebrew and emulator community have found themselves in similar situations, they’ve gone to great lengths to avoid tainting their projects with leaked materials.

Doing the Right Thing?

The Wii Homebrew Channel itself had not seen any significant updates in several years, so Martin archiving the project was somewhat performative to begin with. This would seem to track with his reputation — in addition to clashes with the libogc developers, Martin has also recently left Asahi Linux after a multi-bag-of-popcorn spat within the kernel development community that ended with Linus Torvalds declaring that “the problem is you”.

But that doesn’t mean there isn’t merit to some of his claims. At least part of the debate could be settled by simply acknowledging that RTEMS was an inspiration for libogc in the library’s code or documentation. The fact that the developers seem reluctant to make this concession in light of the evidence is troubling. If not an outright license violation, it’s at least a clear disregard for the courtesy and norms of the open source community.

As for how the leaked Nintendo SDK factors in, there probably isn’t enough evidence one way or another to ever determine what really happened. Martin says code was copied verbatim, while the libogc team says it was reverse engineered.

The key takeaway here is that both parties agree that the leaked information existed, and that it played some part in the origins of the library. The debate therefore isn’t so much about whether the leaked information was used, but how it was used. For some developers, that alone would be enough to pass on libogc and look for an alternative.

Of course, in the end, that’s the core of the problem. There is no alternative, and nearly 20 years after the Wii was released, there’s little chance of another group having the time or energy to create a new low-level C library for the system. Especially without good reason.

The reality is that whatever interaction there was with the Nintendo SDK happened decades ago, and if anyone was terribly concerned about it there would have been repercussions by now. By extension, it seems unlikely that any projects that rely on libogc will draw the attention of Nintendo’s legal department at this point.

In short, life will go on for those still creating and using homebrew on the Wii. But for those who develop and maintain open source code, consider this to be a cautionary tale — even if we can’t be completely sure of what’s fact or fiction in this case.


From Blog – Hackaday via this RSS feed


The Jye Tech DSO-150 is a capable compact scope that you can purchase as a kit. If you’re really feeling the DIY ethos, you can go even further, too, and kit your scope out with the latest open source firmware.

The Open-DSO-150 firmware is a complete rewrite from the ground up, and packs the scope with lots of neat features. You get one analog or three digital channels, and triggers are configurable for rising, falling, or both edges on all signals. There are also a voltmeter mode, a serial data dump feature, and a signal statistics display for broader analysis.

For the full list of features, just head over to the GitHub page. If you’re planning to install it on your own DSO-150, you can build the firmware in the free STM32 version of Atollic trueSTUDIO.

If you’re interested in the Jye Tech DSO-150 as it comes from the factory, we’ve published our very own review, too. Meanwhile, if you’re cooking up your own scope hacks, don’t hesitate to let us know!

Thanks to [John] for the tip!


From Blog – Hackaday via this RSS feed


Brain-to-speech interfaces have been promising to help paralyzed individuals communicate for years. Unfortunately, many systems have had significant latency that has left them lacking somewhat in the practicality stakes.

A team of researchers across UC Berkeley and UC San Francisco has been working on the problem and made significant strides forward in capability. A new system developed by the team offers near-real-time speech—capturing brain signals and synthesizing intelligible audio faster than ever before.

New Capability

The aim of the work was to create more naturalistic speech using a brain implant and voice synthesizer. While this technology has been pursued previously, it faced serious issues around latency, with delays of around eight seconds to decode signals and produce an audible sentence. New techniques had to be developed to try and speed up the process to slash the delay between a user trying to “speak” and the hardware outputting the synthesized voice.

The implant developed by the researchers is used to sample data from the speech sensorimotor cortex of the brain—the area that controls the mechanical hardware that makes speech: the face, vocal cords, and all the other associated body parts that help us vocalize. The implant captures signals via an electrode array surgically implanted into the brain itself. The data captured by the implant is then passed to an AI model which figures out how to turn that signal into the right audio output to create speech. “We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” said Cheol Jun Cho, a Ph.D. student at UC Berkeley. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use, and how to move our vocal-tract muscles.”

The AI model had to be trained to perform this role. This was achieved by having a subject, Ann, look at prompts and attempt to “speak” the phrases. Ann suffered paralysis after a stroke which left her unable to speak. However, when she attempted to speak, the relevant regions in her brain still lit up with activity, and sampling this enabled the AI to correlate certain brain activity with intended speech. Unfortunately, since Ann could no longer vocalize herself, there was no target audio for the AI to correlate the brain data with. Instead, researchers used a text-to-speech system to generate simulated target audio for the AI to match with the brain data during training. “We also used Ann’s pre-injury voice, so when we decode the output, it sounds more like her,” explains Cho. A recording of Ann speaking at her wedding provided source material to help personalize the speech synthesis to sound more like her original speaking voice.

To measure performance of the new system, the team compared the time it took the system to generate speech to the first indications of speech intent in Ann’s brain signals. “We can see relative to that intent signal, within one second, we are getting the first sound out,” said Gopala Anumanchipalli, one of the researchers involved in the study. “And the device can continuously decode speech, so Ann can keep speaking without interruption.” Crucially, too, this speedier method didn’t compromise accuracy—in this regard, it decoded just as well as previous slower systems.

Pictured is Ann using the system to speak in near-real-time. The system also features a video avatar. Credit: UC Berkeley

The decoding system works in a continuous fashion—rather than waiting for a whole sentence, it processes in small 80-millisecond chunks and synthesizes on the fly. The algorithms used to decode the signals were not dissimilar from those used by smart assistants like Siri and Alexa, Anumanchipalli explains. “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming,” he says. “The result is more naturalistic, fluent speech synthesis.”

It was also key to determine whether the AI model was genuinely communicating what Ann was trying to say. To investigate this, Ann was asked to try and vocalize words outside the original training data set—things like the NATO phonetic alphabet, for example. “We wanted to see if we could generalize to the unseen words and really decode Ann’s patterns of speaking,” said Anumanchipalli. “We found that our model does this well, which shows that it is indeed learning the building blocks of sound or voice.”

For now, this is still groundbreaking research—it’s at the cutting edge of machine learning and brain-computer interfaces. Indeed, it’s the former that seems to be making a huge difference to the latter, with neural networks seemingly the perfect solution for decoding the minute details of what’s happening with our brainwaves. Still, it shows us just what could be possible down the line as the distance between us and our computers continues to get ever smaller.

Featured image: A researcher connects the brain implant to the supporting hardware of the voice synthesis system. Credit: UC Berkeley


From Blog – Hackaday via this RSS feed


Rear-view mirrors are important safety tools, but [Mike Kelly] observed that cyclists (himself included) faced hurdles to using them effectively. His solution? A helmet-mounted dual-mirror system he’s calling the Mantis Mirror that looks eminently DIY-able to any motivated hacker who enjoys cycling.

One mirror for upright body positions, the other for lower positions.

Carefully placed mirrors eliminate blind spots, but a cyclist’s position changes depending on how they are riding, and this means mirrors aren’t a simple solution. Mirrors that are aligned just right when one is upright become useless once a cyclist bends down. On top of that, road vibrations have a habit of knocking even the most tightly-cinched mirror out of alignment.

[Mike]’s solution was to attach two small mirrors on a short extension, anchored to a cyclist’s helmet. The bottom mirror provides a solid rear view from an upright position, and the top mirror lets one see backward when in low positions.

[Mike] was delighted with his results, and got enough interest from others that he’s considering a crowdfunding campaign to turn it into a product. In the meantime, we’d love to hear about it if you decide to tinker up your own version.

You can learn all about the Mantis Mirror in the video below, and if you want a clearer look at the device itself, you can find that in some local news coverage.


From Blog – Hackaday via this RSS feed


One might be tempted to think that re-creating a film robot from the 1950s would be easy given all the tools and technology available to the modern hobbyist, but as [Mike Ogrinz]’s quest to re-create Robby the Robot shows us, there is a lot moving around inside that domed head, and reproducing it requires careful and clever work.

The “dome gyros” are just one of the complex assemblies, improved over the original design with the addition of things like bearings.

Just as one example, topping Robby’s head is a mechanical assembly known as the dome gyros. It looks simple, but as the video (embedded below) shows, re-creating it involves a load of moving parts, and a fantastic amount of work has clearly gone into it. At least bearings are inexpensive and common nowadays, and not having to meet film deadlines also means one can afford to design things in a way that allows for easier disassembly and maintenance.

Robby the Robot first appeared in the 1956 film Forbidden Planet and went on to appear in other movies and television programs. Robby went up for auction in 2017 and luckily [Mike] was able to take tons of reference photos. Combined with other enthusiasts’ efforts, his replica is shaping up nicely.

We’ve seen [Mike]’s work before when he shared his radioactive Night Blossoms which will glow for decades to come. His work on Robby looks amazing, and we can’t wait to see how it progresses.


From Blog – Hackaday via this RSS feed


Vintage hi-fi gear has a look and feel all its own. [ThunderOwl] happened to be playing in this space, turning a heavily-modified Technics stereo stack into an awesome neo-retro PC case. Meet the “TechnicsPC!”

This is good. We like this.

You have to hunt across BlueSky for the goodies, but it’s well worth it. The main build concerned throwing a PC into an old Technics receiver, along with a pair of LCD displays and a bunch of buttons for control. If the big screens weren’t enough of a tell that you’re looking at an anachronism, the USB ports just below the power switch will tip you off. A later addition saw a former Technics tuner module stripped out and refitted with card readers and a DVD/CD drive. Perhaps the most era-appropriate addition, though, is the scrolling LED display on top. Stuffed inside another tuner module, it’s a super 90s touch that somehow just works.

These days, off-the-shelf computers are so fancy and glowy that DIY casemodding has fallen away from the public consciousness. And yet, every so often, we see a magnificent build like this one that reminds us just how creative modders can really be. Video after the break.

“Live test”. All more or less as planned, as “cons” – it does not interrupt ongoing scroll cycle with new stuff, it puts new content info with next cycle, so, kinda “info delays”:

[image or embed]

— ThunderOwl (@thunderowl.one) 10 March 2025 at 07:39


From Blog – Hackaday via this RSS feed


Radiation-induced volumetric expansion (RIVE) is a concern for any concrete structure that is exposed to neutron flux and other types of radiation that affect crystalline structures within the aggregate. For research facilities and (commercial) nuclear reactors, RIVE is generally considered to be one of the factors that sets a limit on the lifespan of these structures, through the cracking that occurs as, for example, quartz within the concrete undergoes temporary amorphization with a corresponding volume increase. The significance of RIVE within the context of a nuclear power plant is, however, still poorly studied.

A recent study by [Ippei Maruyama] et al. as published in the Journal of Nuclear Materials placed material samples in the LVR-15 research reactor in the Czech Republic to expose them to an equivalent neutron flux. What their results show is that at the neutron flux levels that are expected at the biological shield of a nuclear power plant, the healing effect from recrystallization is highly likely to outweigh the damaging effects of amorphization, ergo preventing RIVE damage.

This study follows earlier research on the topic at the University of Tokyo by [Kenta Murakami] et al., as well as by Chinese researchers, as in e.g. [Weiping Zhang] et al. in Nuclear Engineering and Technology. [Maruyama] et al. recommend that, to validate these findings, concrete samples from decommissioned nuclear plants be examined for signs of RIVE.

Heading image: SEM-EDS images of the pristine (left) and the irradiated (right) MC sample. (Credit: I. Maruyama et al., 2022)


From Blog – Hackaday via this RSS feed


Typewriters aren’t really made anymore in any major quantity, since the computer kind of rained all over its inky parade. That’s not to say you can’t build one yourself though, as [Toast] did in a very creative fashion.

After being inspired by so many typewriters on YouTube, [Toast] decided they simply had to 3D print one of their own design. They decided to go in a unique direction, eschewing ink ribbons for carbon paper as the source of ink. To create a functional typewriter, they had to develop a typebar mechanism to imprint the paper, as well as a mechanism to move the paper along during typing. The weird thing is the letter selection—the typewriter doesn’t have a traditional keyboard at all. Instead, you select the letter of your choice from a rotary wheel, and then press the key vertically down into the paper. The reasoning isn’t obvious from the outset, but [Toast] explains why this came about after originally hitting a brick wall with a more traditional design.

If you’ve ever wanted to build a typewriter of your own, [Toast]’s example shows that you can have a lot of fun just by having a go and seeing where you end up. We’ve seen some other neat typewriter hacks over the years, too. Video after the break.

[Thanks to David Plass for the tip!]


From Blog – Hackaday via this RSS feed


Most of us learned to design circuits with schematics. But if you get to a certain level of complexity, schematics are a pain. Modern designers — especially for digital circuits — prefer to use some kind of hardware description language.

There are a few options to do similar things with PCB layout, including tscircuit. There’s a walk-through for using it to create an LED matrix and you can even try it out online, if you like. If you’re more of a visual learner, there’s also an introductory video you can watch below.

The example project imports a Pico microcontroller and some smart LEDs. They do appear graphically, but you don’t have to deal with them graphically. You write “code” to manage the connections. For example:

<trace from={".LED1 .GND"} to="net.GND" />

If that looks like HTML to you, you aren’t wrong. Once you have the schematic, you can do the same kind of thing to lay out the PCB using footprints. If you want to play with the actual design, you can load it in your browser and make changes. You’ll note that at the top right, there are buttons that let you view the schematic, the board, a 3D render of the board, a BOM, an assembly drawing, and several other types of output.

Will we use this? We don’t know. Years ago, designers resisted using HDLs for FPGAs, but the bigger FPGAs get, the fewer people want to deal with page after page of schematics. Maybe a better question is: Will you use this? Let us know in the comments.

This isn’t a new idea, of course. Time will tell which HDLs will survive and which will wither.


From Blog – Hackaday via this RSS feed


This week, Jonathan Bennett and Dan Lynch chat with Peter van Dijk about PowerDNS! Is the problem always DNS? How did PowerDNS start? And just how big can PowerDNS scale? Watch to find out!

https://github.com/PowerDNS/
https://github.com/Habbie
https://github.com/voorkant/
https://7bits.nl/journal/

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

Spotify
RSS

Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


From Blog – Hackaday via this RSS feed


Smart glasses are a complicated technology to work with. The smart part is usually straightforward enough—microprocessors and software are perfectly well understood and easy to integrate into even very compact packages. It’s the glasses part that often proves challenging—figuring out the right optics to create a workable visual interface that sits mere millimeters from the eye.

Dev Kennedy is no stranger to this world. He came to the 2024 Hackaday Supercon to give a talk and educate us all on photonics, optical stacks, and the technology at play in the world of smart glasses.

Good Optics

Dev’s talk begins with an apology. He notes that it’s not possible to convey an entire photonics and optics syllabus in a short presentation, which is understandable enough. His warning, regardless, is that his talk is as dense as possible to maximise the insight into the technical information he has to offer.

Things get heavy fast, as Dev dives into a breakdown of all the different basic technologies out there that can be used for building smart glasses. On one slide, he lays them all out with pros and cons across the board. There are a wide range of different illumination and projection technologies, everything from micro-OLED displays to fancy liquid crystal on silicon (LCOS) devices that are used to create an image with the aid of laser illumination. When you’re building smart glasses, though, that’s only half the story.

Dev explains the various optical technologies involved in AR and their strengths and weaknesses.

Once you’ve got something to make an image, you then need something to put it on in front of the eye. Dev goes on to talk about different techniques for doing this, from reflective waveguides to the amusingly-named birdbath combiners. Ultimately, you’re hunting for something that provides a clear and visible image to the user in all conditions, while still providing a great view of the world around them, too. This can be particularly challenging in high-brightness conditions, like walking around outdoors in daylight.

The talk also focuses on a particular bugbear for Dev—the fact that AR and VR aren’t treated as differently as they should be. “VR is a stack of pancakes,” says Dev. “Why is it a stack of pancakes? It’s because all of the PCBs, the optics, the emissions source for the light—is in front of the user’s nose.” Because VR is just about beaming images into the eye, with no regard for the outside world, it’s a little more straightforward. “It’s basically a stack of technology outward from the eye relief point to the back of the device,” Dev explains.

When it comes to AR, though, the solutions must be more complicated. “What’s different is AR is actually an archer,” says Dev, referring to the way such devices must fling light around. “What an archer does is it shoots light around the side of the arm, and it might have to bend it one way or another, up on the crossbar and spread it out through a waveguide, and at the very exit point… at the coupling out portion… the light has to make one more right turn… towards your eye.” Ultimately, the optics and display hardware involved tend to diverge a long way from what can be used in VR displays. “These technologies are fundamentally different,” says Dev. “It strains me to great extent that people kind of batch them into the same category.”

Snapchat’s fifth-generation Spectacles have some interesting optics, but they’re perhaps not quite market ready in Dev’s opinion.

The talk also steps away from raw hardware chat and covers some of the devices on the market, as well as those that left it years ago. Dev makes casual mention of Google Glass, spawned all the way back in 2013, before also noting the developments Microsoft made with HoloLens over the years. As for the current state of play, Dev namechecks Project Orion from Meta, as well as the fifth generation of Snapchat Spectacles.

He gives particular credit to Meta for their work on refining input modalities that work with the smart glasses interface paradigm. Meanwhile, he notes Snapchat needs work on “comfort, weight, and looks,” given how bulky their current product is. Overall, with these products, there are problems to be overcome before they can really become mainstream tools for everyday use. “The important part is the relatability of these devices,” Dev goes on to explain. “We don’t see that just yet, as a $25,000 device from Meta and something that is too thick to be socially acceptable from Snapchat.”

Fundamentally, as Dev’s talk highlights, AR remains a technology still at a nascent stage of development. It’s worth remembering—it took decades to develop computers that could fit in our pockets (smartphones) or on our wrists (smartwatches). Expect smart glasses to actually go mainstream as soon as the technical and optical issues are worked out, and the software and interface solutions actually help people in day to day life.


From Blog – Hackaday via this RSS feed


One of the first things that an amateur radio operator is likely to do once they receive their license is grab a dual-band handheld and try to make contacts with a local repeater. After the initial contacts, though, many hams move on to more technically challenging aspects of the hobby, one of which is activating space-based repeaters instead of their terrestrial counterparts. [saveitforparts] takes a look at some more esoteric uses of these radio systems in his latest video.

There are plenty of satellite repeaters flying around the world that are actually legal for hams to use, with most being in low-Earth orbit and making quick passes at predictable times. But there are others, generally operated by the world’s militaries, that are in higher geostationary orbits which allows them to serve a specific area continually. With a specialized three-dimensional Yagi-Uda antenna on loan, [saveitforparts] listens in on some of these signals. Some of it is presumably encrypted military activity, but there’s also some pirate radio and state propaganda stations.

There are a few other types of radio repeaters operating out in space as well, and not all of them are in geostationary orbit. Turning the antenna to the north, [saveitforparts] finds a few Russian satellites in an orbit specifically designed to provide polar regions with a similar radio service. These sometimes will overlap with terrestrial radio like TV or air traffic control and happily repeat them at brief intervals.

[saveitforparts] has plenty of videos looking at other satellite communications, including grabbing images from Russian weather satellites, using leftover junk to grab weather data from geostationary orbit, and accessing the Internet via satellite with 80s-era technology.


From Blog – Hackaday via this RSS feed


As the Common Business-Oriented Language, COBOL has a long and storied history. To this day it’s quite literally the financial bedrock for banks, businesses, and financial institutions, running largely unnoticed by the world on mainframes and similar high-reliability computer systems. That said, as a domain-specific language targeting boring business things, it doesn’t quite get the attention or hype that general-purpose programming or scripting languages do. Its main characteristic in the public eye appears to be that it’s ‘boring’.

Despite this, COBOL is a very effective language for writing data transactions, report generation, and related tasks. Due to its narrow focus on business applications, it gets one started with very little fuss and is highly self-documenting, while providing native support for decimal calculations and a range of I/O access and database types, even with mere files. Since the 2002 standard, COBOL has undergone a number of modernizations, such as free-form code, object-oriented programming, and more.

Without further ado, let’s fetch an open-source COBOL toolchain and run it through its paces with a light COBOL tutorial.

Spoiled For Choice

It used to be that if you wanted to tinker with COBOL, you pretty much had to either have a mainframe system with OS/360 or similar kicking around, or, starting in 1999, hurl yourself at setting up a mainframe system using the Hercules mainframe emulator. Things got a lot more hobbyist & student friendly in 2002 with the release of GnuCOBOL, formerly OpenCOBOL, which translates COBOL into C code before compiling it into a binary.

While serviceable, GnuCOBOL is not a true compiler, and it does not claim any level of standard adherence despite scoring quite high against the NIST test suite. Fortunately, the GNU Compiler Collection (GCC) just got updated with a brand-new COBOL frontend (gcobol) in the 15.1 release. The only negative is that for now it is Linux-only, but if your distribution of choice already has it in the repository, you can fetch it there easily. The same goes for Windows folk who have WSL set up, or who can use GnuCOBOL with MSYS2.

With either compiler installed, you are now ready to start writing COBOL. The best part of this is that we can completely skip talking about the Job Control Language (JCL), which is an eldritch horror that one would normally be exposed to on IBM OS/360 systems and kin. Instead we can just use GCC (or GnuCOBOL) any way we like, including calling it directly on the CLI, via a Makefile or integrated in an IDE if that’s your thing.

Hello COBOL

As is typical, we start with the ‘Hello World’ example as a first look at a COBOL application:

IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!".
    STOP RUN.

Assuming we put this in a file called hello_world.cob, this can then be compiled with e.g. GnuCOBOL: cobc -x -free hello_world.cob.

The -x indicates that an executable binary is to be generated, and -free that the provided source uses free format code, meaning that we aren’t bound to specific column use or sequence numbers. We’re also free to use lowercase for all the verbs, but having it as uppercase can be easier to read.

From this small example we can see the most important elements, starting with the identification division with the program ID and optionally elements like the author name, etc. The program code is found in the procedure division, which here contains a single display verb that outputs the example string. Of note is the use of the period (.) as a statement terminator.

At the end of the application we indicate this with stop run., which terminates the application, even if called from a sub program.

Hello Data

As fun as a ‘hello world’ example is, it doesn’t give a lot of details about COBOL, other than that it’s quite succinct and uses plain English words rather than symbols. Things get more interesting when we start looking at the aspects which define this domain specific language, and which make it so relevant today.

Few languages support decimal (fixed point) calculations, for example. In this COBOL Basics project I captured a number of examples of this and related features. The main change is the addition of the data division following the identification division:

DATA DIVISION.
WORKING-STORAGE SECTION.
01 A   PIC 99V99      VALUE 10.11.
01 B   PIC 99V99      VALUE 20.22.
01 C   PIC 99V99      VALUE 00.00.
01 D   PIC $ZZZZV99   VALUE 00.00.
01 ST  PIC $*(5).99   VALUE 00.00.
01 CMP PIC S9(5)V99   USAGE COMP VALUE 04199.04.
01 NOW PIC 99/99/9(4) VALUE 04102034.

The data division is unsurprisingly where you define the data used by the program. All variables used are defined within this division, contained within the working-storage section. While seemingly overwhelming, it’s fairly easily explained, starting with the two digits in front of each variable name. This is the data level, and it is how COBOL structures data, with 01 being the highest (root) level and up to 49 levels available to create hierarchical data.
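As a quick illustrative sketch of those hierarchical levels (this snippet is not taken from the COBOL Basics project), a group item can collect related fields under a single name:

01 INVOICE.                                  *> group item: no PIC clause of its own
    05 INV-NUMBER PIC 9(6)     VALUE 123456. *> elementary item
    05 INV-TOTAL  PIC S9(5)V99 VALUE ZERO.   *> elementary item

Referring to INVOICE addresses the whole group at once, while INV-NUMBER and INV-TOTAL address the individual fields. In the listing above, every entry is simply an elementary item at the root level 01.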

The level number is followed by the variable name, up to 30 characters, and then the PICTURE (or PIC) clause. This specifies the type and size of an elementary data item. If we wish to define a decimal value, we can do so as two numeric characters (represented by 9) followed by an implied decimal point V, with two decimal numbers (99). As shorthand we can use e.g. S9(5) to indicate a signed value with 5 numeric characters. There are a few more special characters, such as the asterisk, which replaces leading zeroes, and Z for zero suppression.

The value clause does what it says on the tin: it assigns the value defined following it to the variable. There is, however, a gotcha here, as can be seen with the NOW variable, which gets a value assigned but, due to the PIC format, is turned into a formatted date (04/10/2034).

Within the procedure division these variables are subjected to addition (ADD A TO B GIVING C.), subtraction with rounding (SUBTRACT A FROM B GIVING C ROUNDED.), multiplication (MULTIPLY A BY CMP.) and division (DIVIDE CMP BY 20 GIVING ST.).
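Strung together into a procedure division, and assuming the data division shown above precedes it in the same program, a minimal sketch of these operations could look like the following (the DISPLAY statements are added here purely for illustration):

PROCEDURE DIVISION.
    ADD A TO B GIVING C.                *> 10.11 + 20.22 stored in C
    DISPLAY "A + B = " C.
    SUBTRACT A FROM B GIVING C ROUNDED. *> 20.22 - 10.11 stored in C
    DISPLAY "B - A = " C.
    MULTIPLY A BY CMP.                  *> result replaces CMP
    DISPLAY "CMP   = " CMP.
    DIVIDE CMP BY 20 GIVING ST.         *> ST's edited PIC adds the $ and fill
    DISPLAY "ST    = " ST.
    DISPLAY "NOW   = " NOW.             *> prints the formatted date
    STOP RUN.

Compiled with the same cobc -x -free invocation as the earlier example, this prints both the plain and the edited values, with the PIC clauses doing all of the formatting work.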

Finally, there are a few different internal formats, as defined by USAGE: these are computational (COMP) and display (the default). Here COMP stores the data as binary, with a variable number of bytes occupied, somewhat similar to char, short and int types in C. These internal formats are mostly useful to save space and to speed up calculations.

Hello Business

In a previous article I went over the reasons why a domain-specific language like COBOL cannot realistically be replaced by a general-purpose language. In that same article I discussed the Hello Business project that I had written in COBOL as a way to gain some familiarity with the language. That particular project should be somewhat easy to follow with the information provided so far. What is new there is mostly file I/O, loops, the use of PERFORM, and of course the Report Writer, which is probably best understood by reading the IBM Report Writer Programmer’s Manual (PDF).
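As a small taste of the loop syntax mentioned above, here is a self-contained sketch of an inline PERFORM (illustrative only, not code from the Hello Business project):

IDENTIFICATION DIVISION.
PROGRAM-ID. loop-demo.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 I PIC 99 VALUE 0.
PROCEDURE DIVISION.
    *> count from 1 to 5, displaying each pass
    PERFORM VARYING I FROM 1 BY 1 UNTIL I > 5
        DISPLAY "Processing record " I
    END-PERFORM.
    STOP RUN.

The same PERFORM verb can also call out to named paragraphs and sections, which is how larger COBOL programs are typically structured.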

Going over the entire code line by line would take a whole article by itself, so I will leave it as an exercise for the reader unless there is somehow a strong demand by our esteemed readers for additional COBOL tutorial articles.

Suffice it to say that there is a lot more functionality in COBOL beyond these basics. The IBM ILE COBOL reference (PDF), the IBM Mainframer COBOL tutorial, the Wikipedia entry and others give a pretty good overview of many of these features, which includes object-oriented COBOL, database access, heap allocation, interaction with other languages and so on.

Despite being only a novice COBOL programmer at this point, I have found this DSL to be very easy to pick up once I understood some of the oddities about the syntax, such as the use of data levels and the PIC formats. It is my hope that with this article I was able to share some of the knowledge and experiences I gained over the past weeks during my COBOL crash course, and maybe inspire others to also give it a shot. Let us know if you do!


From Blog – Hackaday via this RSS feed


As any Linux chat room or forum will tell you, the most powerful tool for any Linux user is a terminal emulator. Just about every program under the sun has a command line alternative, be it CAD, note taking, or web browsing. Likewise, the digital audio workstation (DAW) is the single most important tool for anyone making music. Therefore, [unspeaker] decided the two should, at last, be combined with a terminal-based DAW called Tek.

Tek functions similarly to other DAWs, albeit with keyboard-only input. For anyone used to working in Vim or Emacs (we ask you keep the inevitable text editor comment war civil), Tek will be very intuitive. Currently, the feature set is fairly spartan, but plans exist to add keybinds for save/load, help, and more. The program features several modes, including a multi-track sequencer/sampler called the “arranger.” Each track in the arranger is color coded with a gradient of colors generated randomly at start for a fresh look every time.

Modern audio workflows often span numerous programs, and Tek was built with this in mind. It can take MIDI input and output from the JACK Audio Connection Kit, and plans also exist to create a plugin server so Tek could be used with other DAWs like Ardour or Zrythm. Moreover, being a terminal program opens possibilities for complicated shell scripting and other such Linux-fu.

Maybe a terminal DAW is not your thing, so make sure to check out this physical one instead!


From Blog – Hackaday via this RSS feed


Recently [Glen Akins] reported on Bluesky that the Zigbee-based sensor he had made for his garden’s rear gate was still going strong after a Summer and Winter on the original 2450 lithium coin cell. The construction plans and design for the unit are detailed in a blog post. At the core is the MS88SF2 SoM by Minew, which features a Nordic Semiconductor nRF52840 SoC that provides the Zigbee RF feature as well as the usual MCU shenanigans.

Previously [Glen] had created a similar system that featured buttons to turn the garden lights on or off, as nobody likes stumbling blindly through a dark garden after returning home. Rather than having to fumble around for a button, the system should detect when said rear gate is opened. This would send a notification to [Glen]’s phone as well as activate the garden lights if it’s dark outside.

Although using a reed switch seemed like an obvious solution to replace the buttons, holding it closed turned out to require too much power. After looking at a few commercial examples, he settled on a Hall effect sensor solution with the TI DRV5032FB in a TO-92 package.

Whereas the average person would just have put in a PIR sensor-based solution, this Zigbee solution does come with a lot more smart home creds, and does not require fumbling around with a smartphone or yelling at a voice assistant to turn the garden lights on.


From Blog – Hackaday via this RSS feed


There are a lot of distractions in daily life, especially with all the different forms of technology and their accompanying algorithms vying for our attention in the modern world. [mar1ash] makes the same observation about our shared experiences fighting to stay sane with all these push notifications and alerts, and wanted something a little simpler that can just tell time and perhaps a few other things. Enter the time brick.

The time brick is a simple way of keeping track of the most basic of things in the real world: time and weather. The device has no buttons and only a small OLED display. Based on an ESP-01 module and housed in a LEGO-like enclosure, the USB-powered clock sits quietly by a bed or computer with no need for any user interaction at all. It gets its information over a Wi-Fi connection configured in the code running on the device, and cycles through not only time, date, and weather but also a series of pre-programmed quotes of a surreal nature, since part of [mar1ash]’s goals for this project was to do something just a little bit outside the norm.

There are a few other quirks in this tiny device as well, including animations for the weather display, a “night mode” that’s automatically activated to account for low-light conditions, and the ability to easily handle WiFi drops and other errors without crashing. All of the project’s code is also available on its GitHub page. As far as design goes, it’s an excellent demonstration that successful projects have to avoid feature creep, and that doing one thing well is often a better design philosophy than adding needless complications.


From Blog – Hackaday via this RSS feed


The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])

Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying despite guides always telling you how incredibly easy it is to keep them from wilting with a modicum of care each day, even without opting for succulents or cactuses. In a fit of despair [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but it’s definitely really smart and can even generate pretty pictures, which is something that the average human can not. Hence it’s time to let an LLM do all the smart plant caring stuff with ‘PlantMom’.

Since LLMs (so far) don’t come with physical appendages by default, some hardware had to be plugged together to measure parameters like light, temperature, and soil moisture. Add to this a grow light and a water pump, and all that remained was to tell the LLM using an extensive prompt (containing Python code) what it should do (keep the plant alive) and what responses (Python methods) are available. All that was left then was to let the ‘AI’ (Google’s Gemma 3) handle it.

To say that this resulted in a dramatic failure along with what reads like an emotional breakdown (on the side of the LLM) would be an understatement. The LLM insisted on turning the grow light on when it should be off and had the most erratic watering responses imaginable based on absolutely incorrect interpretations of the ADC data (flipping dry vs wet). After this episode the poor chili plant’s soil was absolutely saturated and is still trying to dry out, while the ongoing LLM experiment (with empty water tank) has the grow light blasting more often than a weed farm.

So far it seems that the humble state machine’s job is still safe from being taken over by ‘AI’, and not even brown-thumb folk can kill plants this efficiently.


From Blog – Hackaday via this RSS feed


A quadrature encoder provides a way to let hardware read movement (and direction) of a shaft, and they can be simple, effective, and inexpensive devices. But [Paulo Marques] observed that when it comes to reading motor speeds with them, what works best at high speeds doesn’t work at low speeds, and vice versa. His solution? PicoEncoder is a library providing a lightweight and robust method of using the Programmable I/O (PIO) hardware on the RP2040 to get better results, even (or especially) from cheap encoders, and do it efficiently.

The results of the sub-step method (blue) resemble the output of a low-pass filter, but are delivered with no delay or CPU burden.

The output of a quadrature encoder is typically two square waves that are out of phase with one another. This data says whether a shaft is moving, and in what direction. When used to measure something like a motor shaft, one can also estimate rotation speed. Count how many steps come from the encoder over a period of time, and use that as the basis to calculate something like revolutions per minute.

[Paulo] points out that one issue with this basic method is that the quality depends a lot on how much data one has to work with. But the slower a motor turns, the less data one gets. To work around this, one can use a different calculation optimized for low speeds, but there’s really no single solution that handles high and low speeds well.

Another issue is that readings at the “edges” of step transitions can have a lot of noise. This can be ignored and assumed to average out, but it’s a source of inaccuracy that gets worse at slower speeds. Finally, while an ideal encoder has individual phases that are exactly 50% duty cycle and exactly 90 degrees out of phase with one another, this is almost never actually the case with cheaper encoders. Again, a source of inaccuracy.

[Paulo]’s solution was to roll his own method with the RP2040’s PIO, using a hybrid approach to effect a “sub-step” quadrature encoder. Compared to simple step counting, PicoEncoder more carefully tracks transitions to avoid problems with noise, and even accounts for phase size differences present in a particular encoder. The result is a much more accurate calculation of motor speed and position without any delays. Most of the work is done by the PIO of the RP2040, which does the low-level work of counting steps and tracking transitions without any CPU time involved. Try it out the next time you need to read a quadrature encoder for a motor!

The PIO is one of the more interesting pieces of functionality in the RP2040 and it’s great to see it used in a such a clever way. As our own Elliot Williams put it when he evaluated the RP2040, the PIO promises never having to bit-bang a solution again.


From Blog – Hackaday via this RSS feed


On a Commodore 64, the computer is normally connected to a monitor with one composite video cable and to an audio device with a second, identical (although uniquely colored) cable. The signals passed through these cables are analog, each generated by a dedicated chip on the computer. Many C64 users may have accidentally swapped these cables when first setting up their machines, but [Matthias] wondered if this could be done purposefully — generating video with the audio hardware and vice versa.

Getting an audio signal from the video hardware on the Commodore is simple enough. The chips here operate at well over the needed frequency for even the best audio equipment, so it’s a relatively straightforward matter of generating an appropriate output wave. The audio hardware, on the other hand, is much less performant by comparison. The only component here capable of generating a fast enough signal to be understood by display hardware of the time is actually the volume register, although due to a filter on the chip the output is always going to be a bit blurred. But this setup is good enough to generate large text and some other features as well.

There are a few other constraints here as well, namely that loading the demos that [Matthias] has written takes so long that the audio can’t be paused while this happens and has to be bit-banged the entire time. It’s an in-depth project that shows mastery of the retro hardware, and for some other C64 demos, take a look at this one, which is written in just 256 bytes.

Thanks to [Jan] for the tip!


From Blog – Hackaday via this RSS feed

98
10

It’s not unusual for redundant satellites, rocket stages, or other spacecraft to re-enter the earth’s atmosphere. Usually they pass unnoticed or put on a spectacular light show, and only very rarely do a few pieces make it to the surface of the planet. Coming up, though, is something entirely different: the re-entry of a retired craft that might make it to the ground intact. To find out more we have to travel back to the early 1970s, and Kosmos-482. It was a failed Soviet Venera mission, and since its lander was heavily over-engineered to survive entry into the Venusian atmosphere, there’s a fascinating prospect that it might also survive Earth re-entry.

This model of the earlier Venera 7 probe, launched in 1970, shows the heavy protection needed to survive entry into the Venusian atmosphere. Emerezhko, CC BY-SA 4.0.

At the time of writing the re-entry is expected to happen on the 10th of May, but thanks to its shallow re-entry angle it is still difficult to predict where it might land. The craft is thought to be about a metre across and to weigh just under 500 kilograms, with its speed upon landing projected to be between 60 and 80 metres per second. Should it hit land rather than water, its remains are thought to present an immediate hazard only to anything directly in its path.

Were it to be recovered it would be a fascinating artifact of the Space Race, and once the inevitable question of its ownership was resolved (do marine salvage laws apply in space?) we’d expect it to become a world-class museum exhibit. If that happens, we look forward to bringing you a report.

This craft isn’t the only surviving relic of the Space Race out there, though it may be the only one we have a chance of seeing up-close. Some of the craft from that era are even still alive.

Header: Moini, CC0.


From Blog – Hackaday via this RSS feed

99
15

If you’ve only been around for the Internet age, you may not realize that Hackaday is the successor to the electronics magazines of old. In their heyday, magazines like Popular Electronics, Radio Electronics, and Elementary Electronics brought us projects to build. Hacks, if you will. And just like Hackaday’s, their readers weren’t all at the same skill level. So you’d see some hat with a blinking light on it, followed by a super-advanced project like a TV typewriter or a computer. Or a picture phone.

In 1982, Radio Electronics, a major magazine of the day, showed plans for building a picture phone. All you needed was a closed-circuit TV camera, a TV, a telephone, and about two shoeboxes crammed full of parts.

Like many picture phones of its day, it stretched the definition a little. It actually used ham radio-style slow-scan TV (SSTV) to send a frame of video about once every eight seconds. That’s not backwards: eight seconds per frame, for a frame rate of 0.125 Hz. And while the resulting 128 x 256 image would seem crude today, this was amazingly high tech for 1982.
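
Run the numbers and the glacial pace makes sense: 128 x 256 is 32,768 pixels per frame, and at one frame every eight seconds that works out to roughly 4,000 pixels per second, which is about all you can reasonably squeeze through a voice-grade phone line.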

Slow Scan for the Win

Hams had been playing with SSTV for a long time. Early experiments used high-persistence CRTs, so you’d see the image for as long as the phosphor kept glowing. You also had to sit still for the entire eight seconds to send the picture.

It didn’t take long for hams to take advantage of more modern circuits that could capture the slow incoming image and redisplay it as a normal TV signal for as long as you wanted, and that’s what this box does as well. Early “scan converters” used video storage tubes that were factory rejects (because a perfect new one might have cost $50,000). However, cheap digital memory quickly replaced those storage tubes, making SSTV more practical and affordable.

One of Mitsubishi’s Picture Phones

Still, it never really caught on for telephone networks. A few years later, a handful of commercial products offered similar tech. Atari, for example, made a phone that was bought up by Mitsubishi and sold as the Luna around 1986. Mitsubishi, Sony, and others tried, unsuccessfully, to get the market to accept these slow picture phones. Between the cost of making a call and a minimum of $400 to buy the hardware, it was a hard sell.

You might think this sounds like a weekend project with a Pi-Cam, and you are probably right if you did it now. But in 1982, the amount of work it took to make this work was significant. It helped that it used MM5280 dynamic RAM chips, which held a whopping 4,096 bits (not bytes) of memory. The project needed 16 of the chips, which, at the time, were about $5 each. Remember that $80 in those days was a lot more than $80 today, and you had to buy the rest of the parts, the camera (the article estimates that’s $150, alone), and so on. This wasn’t a poor high school student project.

Robot Kits

You could buy entire kits or just key parts, which was a common thing for magazines to do in those days. The kits came from Robot Research, which was known for making SSTV equipment for hams, so it makes sense that they knew how to do this. The author mentions that “this project is not for beginners.” He explains there are nearly 100 ICs on a “tightly-packed double-sided PC board.”

The device had two primary inputs: fast-scan video from the camera and slow-scan audio from the phone line. Both could be digitized and stored in the memory array. The memory could likewise output fast-scan video for the monitor or slow-scan tones for the phone line. Obviously, the system was half duplex; if you were sending a picture, you wouldn’t expect to receive one at the same time.

This is just the main board!

The input conversion was done with comparators for speed. Luckily, the conversion was only four bits of monochrome, so just 16 of them (IC73-80) got the job done. Memory speed was also a concern; each memory chip’s enable line was activated while the previous chip was still halfway through its cycle.

Since there is no microcontroller, the design includes plenty of gates, op amps, bipolar transistors, and the like. The adjacent picture shows just the device’s main board!

Lots of Parts

If you want to dig into the details, you’ll also want to look at part 2. There’s more theory of operation there and the parts list. The article notes that you could record the tones to a cassette tape for later playback, but that you’d “probably need a device from your local phone company to couple the Picture Phone to their lines.” Ah, the days of the DAA.

They even noted in part 2 that connecting a home-built Picture Phone directly to the phone lines was illegal, which was true at the time. Part 3 talks even more about the phone interface (and that same issue has a very cool roundup of all the computers you could buy in 1982, ranging from $100 to $6,000). Part 4 was all about alignment and yet more about the phone interface.

Alignment shouldn’t have been too hard. The highest tone on the phone line was 2,300 Hz. While there are many SSTV standards today for color images, this old-fashioned scheme was simple: 2,300 Hz for white and 1,500 Hz for black, with a 1,200 Hz tone providing sync. Interestingly, sharp jumps in brightness could create artifacts, so the converters used a Gray code to keep adjacent levels from producing unnecessarily large jumps in the transmitted value.
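
To make the Gray code trick concrete, here’s a small sketch in C. The 1,500/2,300 Hz endpoints come from the article; the linear spacing of the sixteen levels and the little encode/decode helpers are our own assumptions for illustration, not a description of the magazine’s circuit.

/* Illustrative only: 4-bit brightness levels, Gray-coded so that
 * adjacent levels differ by a single bit. */
#include <stdio.h>

static unsigned gray_encode(unsigned b) { return b ^ (b >> 1); }

static unsigned gray_decode(unsigned g) {
    unsigned b = 0;
    for (; g; g >>= 1)
        b ^= g;
    return b;
}

/* Assumed linear mapping: 0 = black (1500 Hz) .. 15 = white (2300 Hz). */
static double level_to_freq(unsigned level) {
    return 1500.0 + (2300.0 - 1500.0) * level / 15.0;
}

int main(void) {
    unsigned level;
    for (level = 0; level < 16; level++) {
        unsigned g = gray_encode(level);
        printf("level %2u -> gray 0x%X -> %4.0f Hz (decodes back to %u)\n",
               level, g, level_to_freq(level), gray_decode(g));
    }
    return 0;
}

The property being exploited is that neighboring brightness levels differ in only one bit, so a momentary glitch during a transition can’t produce a code that decodes to a wildly different value.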

The Phone Book

It wouldn’t make sense to make only one of these, so we wonder how many pairs were built. The magazine did ask people to report if they had one and intended to publish a picture phone directory. We don’t know if that ever happened, but given what a long-distance phone call cost in 1982, we imagine that idea didn’t catch on.

The video phone was long a dream, and we still don’t have exactly what people imagined. We would really like to replicate this picture phone on a PC using GNU Radio, for example.


From Blog – Hackaday via this RSS feed

100
13

PCBs of two continuous glucose monitors

Continuous glucose monitors (CGMs) aren’t just widgets for the wellness crowd. For many, CGMs are real-time feedback machines for the body, offering glucose trendlines that help people rethink how they eat. They also let diabetics get on with daily life without stabbing a fingertip several times a day, often at the most inconvenient of moments. This video by [Becky Stern] compares two of the most popular CGMs: the Abbott Libre 3 and the Dexcom G7.

Both the Libre 3 and the G7 come with spring-loaded applicators and stick to the upper arm. At first glance they seem similar, but the differences run deep. The Libre 3 is the more minimalist of the two: just two plastic discs sandwiching the electronics. The G7, in contrast, features an over-molded shell that suggests a higher production cost and, perhaps, greater robustness. The G7 also needs a button push to engage, which users describe as slightly clumsy compared to the Libre’s simpler poke-and-go design. One trade-off: the G7’s ten-day lifespan means more waste than the fourteen-day Libre, but it tolerates longer submersion in water, if that’s your passion.

While these devices are primarily intended for people with diabetes, they’ve quietly been adopted by a growing tribe of biohackers and curious minds who are eager to explore their own metabolic quirks. In February, we featured a dissection of the Stelo CGM, cracking open its secrets layer by layer.


From Blog – Hackaday via this RSS feed
