1401
69

Oct 4 (Reuters) - A U.S. appeals court on Wednesday rejected Google's bid to stop Texas and a group of other states from moving their antitrust lawsuit against the Alphabet (GOOGL.O) unit from New York federal court to Texas.

1402
28
submitted 1 year ago by zephyreks@lemmy.ml to c/technology@lemmy.ml
1403
62
submitted 1 year ago by floofloof@lemmy.ca to c/technology@lemmy.ml
1404
10
submitted 1 year ago by Pluto@hexbear.net to c/technology@lemmy.ml

cross-posted from: https://hexbear.net/post/778297

Another tech review.

Honestly, I'm in a "retail therapy" sort-of mood.

Your thoughts?

Video duration: 9:28

1405
-1
submitted 1 year ago by Blaed@lemmy.world to c/technology@lemmy.ml

cross-posted from: https://lemmy.world/post/6399678

🤖 Happy FOSAI Friday! 🚀

Friday, October 6, 2023

HyperTech News Report #0003

Hello Everyone!

This week highlights a wave of new papers and frameworks that expand LLM functionality. With a tsunami of applications on the horizon, I foresee a bedrock of tooling to precede it. I'm not sure which kits and processes will end up part of this bedrock, but I hope some of these methods prove interesting or helpful to your workflow!

Table of Contents

Community Changelog

Image of the Week

This image of the week comes from one of my own projects! I hope you don't mind me sharing... I was really happy with this result. It was generated from an SDXL model I trained and host on Replicate. I use a mock ensemble approach to generate various game assets for an experimental roguelike I'm making with a colleague.

My current method is not at all efficient, but I have fun. Right now, I have three SDXL models I interact with, each generating art I can use for my project. Andraxus takes care of wallpapers and in-game levels (this image you're seeing here), his in-game companion Biazera imagines characters and entities of this world, while Cerephelo tinkers and toils over the machinations within - crafting items, loot, powerups, etc.

I've been hesitant to self-promote here, but if there's genuine interest in this project I'd be more than happy to share more details. It's still in pre-alpha development, but there are plans to release all of the models we use as open source (obviously). We're still working on the engine, though. Let me know if you want to see more of this project.


News


  1. Arxiv Publications Workflow: A new workflow has been introduced that allows users to scrape search topics from Arxiv, converting the results into markdown (MD) format. This makes it easier to digest and understand topics from Arxiv published content. The tool, available on GitHub, is particularly useful for those who wish to delve deeper into research papers and run their own research processes.

  2. Texting LLMs from Your Phone: A guide has been shared that enables users to communicate with their personal assistants via simple text messages. The process involves setting up a Twilio account, purchasing and registering a phone number, and then integrating it with the Replicate platform. The code, available on GitHub, makes it possible to send and receive messages from LLMs directly on one's phone.

  3. Microsoft's AutoGen: Microsoft has released AutoGen, a tool designed to aid in the creation of autonomous LLM agents. Compatible with ChatGPT models, AutoGen facilitates the development of LLM applications using multiple agents that can converse with each other to solve tasks. The framework is customizable and allows for seamless human participation. More details can be found on GitHub.

  4. Promptbench and ACE Framework: Promptbench is a new project focused on the evaluation and benchmarking of models. Stemming from the DyVal paper, it aims to provide reliable insights into model performance. On the other hand, the ACE Framework, designed for autonomous cognitive entities, offers a unique approach to agent tooling. While still in its early stages, it promises to bring about innovative implementations in the realms of personal assistants, game world NPCs, autonomous employees, and embodied robots.

  5. Research Highlights: Several papers have been published that delve into the intricacies of LLMs. One paper introduces a method to enhance the zero-shot reasoning abilities of LLMs, while another, titled DyVal, proposes a dynamic evaluation protocol for LLMs. Additionally, the concept of Low-Rank Adapters (LoRA) ensembles for LLM fine-tuning has been explored, emphasizing the potential of using one model and dynamically swapping the fine-tuned QLoRA adapters.


Tools & Frameworks


Keep Up w/ Arxiv Publications

Due to a drastic change in personal and work schedules, I've had to shift how I research and develop posts and projects for you guys. That being said, I found this workflow from the same author as the ACE Framework particularly helpful. It scrapes a search topic from Arxiv and returns a massive XML that is converted to markdown (MD), which can then be used as injectable context for an LLM of your choosing (to further break down and understand topics) or as a well of information for the classic CTRL + F search. Either way, the information from Arxiv's published content ends up aggregated and human-readable.

After reading the abstracts you can drill further into each paper and dissect / run your own research processes as you see fit. There is definitely more room for automation and organization here, I'm sure, but this has been a big resource for me lately, so I wanted to share it with others who might find it helpful too.
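As a rough sketch of the idea (not the linked tool itself), the Arxiv-to-markdown step can be done with nothing but the standard library: the arXiv export API returns an Atom feed, and each entry just needs to be flattened into headings and abstracts. The sample feed below is made up for illustration; a real run would fetch from the export API instead.

```python
# Minimal sketch of the Arxiv -> markdown idea: parse an Atom feed as returned
# by arXiv's export API and emit a markdown digest of each entry.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def atom_to_markdown(atom_xml: str) -> str:
    """Convert an arXiv Atom feed into a markdown digest."""
    root = ET.fromstring(atom_xml)
    lines = []
    for entry in root.iter(f"{ATOM_NS}entry"):
        title = entry.findtext(f"{ATOM_NS}title", "").strip()
        summary = entry.findtext(f"{ATOM_NS}summary", "").strip()
        link = entry.findtext(f"{ATOM_NS}id", "").strip()
        lines += [f"## {title}", "", f"<{link}>", "", summary, ""]
    return "\n".join(lines)

# Tiny embedded sample so the sketch runs offline; a real run would fetch e.g.
# http://export.arxiv.org/api/query?search_query=all:quantization
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/2309.00000</id>
    <title>Example Paper</title>
    <summary>One-sentence abstract.</summary>
  </entry>
</feed>"""

print(atom_to_markdown(sample))
```

From there, the markdown can be dumped straight into an LLM prompt or searched with CTRL + F as described above.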

Text LLMs from Your Phone

I had an itch to make my personal assistants more accessible - so I started investigating ways I could simply text them from my iPhone (via simple sms). There are many other ways I could've done this, but texting has been something I always like to default to in communications. So, I found this cool guide that uses infra I already prefer (Replicate) and has a bonus LangChain integration - which opens up the door to a ton of other opportunities down the line.

This tutorial was pretty straightforward - but to be honest, making the Twilio account, buying a phone number (then registering it) took the longest. The code itself takes less than 10 minutes to get up and running with ngrok. Super simple and straightforward there. The Twilio process? Not so much.. but it was worth the pain!

I am still waiting on my phone number to be verified (so that the Replicate inference endpoint can actually send SMS back to me) but I ended the night successfully texting the server on my local PC. It was wild texting the Ahsoka example from my phone and seeing the POST response return (even though it didn't go through SMS I could still see the server successfully receive my incoming message/prompt). I think there's a lot of fun to be had giving casual phone numbers and personalities to assistants like this. Especially if you want to LangChain some functions beyond just the conversation. If there's more interest on this topic, I can share how my assistant evolves once it gets full access to return SMS. I am designing this to streamline my personal life, and if it proves to be useful I will absolutely release the project as open-source.
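To give a sense of the moving parts without Twilio's or Flask's client libraries, here is a stdlib-only sketch of the webhook's request/response shape: Twilio POSTs the incoming SMS as form data (with the message text in the `Body` field) and expects a TwiML XML reply. The `handle_sms` helper and the stubbed LLM call are my own illustration, not the guide's code.

```python
# Stdlib-only sketch of a webhook that Twilio could POST incoming SMS to.
# The real guide uses Flask + Replicate; this only shows the request/response
# shape: form-encoded input in, TwiML XML out.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs
from xml.sax.saxutils import escape

def reply_twiml(text: str) -> str:
    """Wrap the assistant's reply in TwiML so Twilio sends it back as SMS."""
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            f"<Response><Message>{escape(text)}</Message></Response>")

def handle_sms(form_body: str) -> str:
    """Given Twilio's form-encoded POST body, produce a TwiML reply."""
    form = parse_qs(form_body)
    prompt = form.get("Body", [""])[0]   # the text the user sent
    # Here the real setup would call a Replicate-hosted LLM; we stub it:
    answer = f"You said: {prompt}"
    return reply_twiml(answer)

class SmsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        twiml = handle_sms(self.rfile.read(length).decode())
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(twiml.encode())

# To serve for real: HTTPServer(("127.0.0.1", 8000), SmsHandler).serve_forever(),
# then expose the port with ngrok and set it as the Twilio number's webhook URL.
```

Swapping the stubbed `answer` line for an actual model call is the only piece that touches inference infrastructure.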

AutoGen

With agents on the rise, tools and automation pipelines to build them have become increasingly important to consider. Microsoft seems well aware of this, and has released AutoGen, a tool to help enable this automation tooling and the creation of autonomous LLM agents. AutoGen is compatible with ChatGPT models and is being kitted out for local LLMs as we speak.

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
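This is not AutoGen's actual API, but the conversable-agents pattern it automates can be sketched in a few lines: two agents exchange messages until one emits a termination signal. The agent names, the stubbed responders, and the TERMINATE convention here are all illustrative.

```python
# Toy illustration of the multi-agent conversation pattern (not AutoGen's API):
# two agents alternate messages until one signals the task is done.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    name: str
    respond: Callable[[str], str]            # maps incoming message -> reply
    history: List[str] = field(default_factory=list)

    def receive(self, message: str) -> str:
        self.history.append(message)
        reply = self.respond(message)
        self.history.append(reply)
        return reply

def converse(a: Agent, b: Agent, opening: str, max_turns: int = 6) -> List[str]:
    """Alternate messages between two agents until 'TERMINATE' or max_turns."""
    transcript, msg, speaker = [opening], opening, a
    for _ in range(max_turns):
        msg = speaker.receive(msg)
        transcript.append(msg)
        if "TERMINATE" in msg:
            break
        speaker = b if speaker is a else a
    return transcript

# Stub "LLMs": a worker that answers, and a critic that accepts the answer.
worker = Agent("assistant", lambda m: "2 + 2 = 4")
critic = Agent("user_proxy", lambda m: "Correct. TERMINATE")
log = converse(worker, critic, "What is 2 + 2?")
```

In the real framework the `respond` callables would be backed by LLM calls, tool execution, or human input, which is exactly the "various modes" described above.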

Promptbench

I recently found promptbench - a project that seems to have stemmed from the DyVal paper (shared below). I for one appreciate some of the new tools that are releasing focused around the evaluation and benchmarking of models. I hope we continue to see more evals, benchmarks, and projects that return us insights we can rely upon.

ACE Framework

A new framework has been proposed and designed for autonomous cognitive entities. It appears similar to agents and their style of tooling, but with a different architectural approach. I don't believe an implementation is ready yet, but it may be soon, so it's something to keep an eye on.

There are many possible implementations of the ACE Framework. Rather than detail every possible permutation, here is a list of categories that we perceive as likely and viable.

Personal Assistant and/or Companion

  • This is a self-contained version of ACE that is intended to interact with one user.
  • Think of Cortana from HALO, Samantha from HER, or Joi from Blade Runner 2049. (yes, we recognize these are all sexualized female avatars)
  • The idea would be to create something that is effectively a personal Executive Assistant that is able to coordinate, plan, research, and solve problems for you. This could be deployed on mobile, smart home devices, laptops, or web sites.

Game World NPC's

  • This is a kind of game character that has their own personality, motivations, agenda, and objectives. Furthermore, they would have their own unique memories.
  • This can give NPCs a much more realistic ability to pursue their own objectives, which should make game experiences much more dynamic and unpredictable, thus raising novelty. These can be adapted to 2D or 3D game engines such as PyGame, Unity, or Unreal.

Autonomous Employee

  • This is a version of the ACE that is meant to carry out meaningful and productive work inside a corporation.
  • Whether this is a digital CSR or backoffice worker depends on the deployment.
  • It could also be a "digital team member" that primarily interacts via Discord, Slack, or Microsoft Teams.

Embodied Robot

The ACE Framework is ideal for creating self-contained, autonomous machines, whether they are domestic aid robots or something like WALL-E.


Papers


Agent Instructs Large Language Models to be General Zero-Shot Reasoners

We introduce a method to improve the zero-shot reasoning abilities of large language models on general language understanding tasks. Specifically, we build an autonomous agent to instruct the reasoning process of large language models. We show this approach further unleashes the zero-shot reasoning abilities of large language models to more tasks. We study the performance of our method on a wide set of datasets spanning generation, classification, and reasoning. We show that our method generalizes to most tasks and obtains state-of-the-art zero-shot performance on 20 of the 29 datasets that we evaluate. For instance, our method boosts the performance of state-of-the-art large language models by a large margin, including Vicuna-13b (13.3%), Llama-2-70b-chat (23.2%), and GPT-3.5 Turbo (17.0%). Compared to zero-shot chain of thought, our improvement in reasoning is striking, with an average increase of 10.5%. With our method, Llama-2-70b-chat outperforms zero-shot GPT-3.5 Turbo by 10.2%.

DyVal: Graph-informed Dynamic Evaluation of Large Language Models

Large language models (LLMs) have achieved remarkable performance in various evaluation benchmarks. However, concerns about their performance are raised on potential data contamination in their considerable volume of training corpus. Moreover, the static nature and fixed complexity of current benchmarks may inadequately gauge the advancing capabilities of LLMs. In this paper, we introduce DyVal, a novel, general, and flexible evaluation protocol for dynamic evaluation of LLMs. Based on our proposed dynamic evaluation framework, we build graph-informed DyVal by leveraging the structural advantage of directed acyclic graphs to dynamically generate evaluation samples with controllable complexities. DyVal generates challenging evaluation sets on reasoning tasks including mathematics, logical reasoning, and algorithm problems. We evaluate various LLMs ranging from Flan-T5-large to ChatGPT and GPT4. Experiments demonstrate that LLMs perform worse in DyVal-generated evaluation samples with different complexities, emphasizing the significance of dynamic evaluation. We also analyze the failure cases and results of different prompting methods. Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks. We hope that DyVal can shed light on the future evaluation research of LLMs.
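The core idea, sketched here loosely rather than as the paper's implementation, is that a randomly generated DAG doubles as both the evaluation question and its ground truth, with depth acting as the difficulty knob:

```python
# Loose sketch of DyVal's core idea (not the paper's code): build a random
# arithmetic DAG whose depth controls difficulty, then render it as a question
# whose ground-truth answer is computed from the graph itself.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_dag_sample(depth: int, seed: int = 0):
    """Return (question, answer) for a randomly generated arithmetic DAG."""
    rng = random.Random(seed)
    values = {f"n{i}": rng.randint(1, 9) for i in range(2)}   # leaf nodes
    steps = []
    for i in range(2, 2 + depth):                             # internal nodes
        a, b = rng.sample(sorted(values), 2)                  # pick two parents
        op = rng.choice(sorted(OPS))
        name = f"n{i}"
        values[name] = OPS[op](values[a], values[b])          # ground truth
        steps.append(f"{name} = {a} {op} {b}")
    leaves = ", ".join(f"{k} = {values[k]}" for k in ("n0", "n1"))
    question = f"Given {leaves} and {'; '.join(steps)}, what is n{1 + depth}?"
    return question, values[f"n{1 + depth}"]

q, gold = make_dag_sample(depth=3, seed=42)
```

Because samples are generated on the fly, a model cannot have memorized them from its training corpus, which is the contamination concern the abstract raises.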

LoRA ensembles for large language model fine-tuning

Finetuned LLMs often exhibit poor uncertainty quantification, manifesting as overconfidence, poor calibration, and unreliable prediction results on test data or out-of-distribution samples. One approach commonly used in vision for alleviating this issue is a deep ensemble, which constructs an ensemble by training the same model multiple times using different random initializations. However, there is a huge challenge to ensembling LLMs: the most effective LLMs are very, very large. Keeping a single LLM in memory is already challenging enough: keeping an ensemble of e.g. 5 LLMs in memory is impossible in many settings. To address these issues, we propose an ensemble approach using Low-Rank Adapters (LoRA), a parameter-efficient fine-tuning technique. Critically, these low-rank adapters represent a very small number of parameters, orders of magnitude less than the underlying pre-trained model. Thus, it is possible to construct large ensembles of LoRA adapters with almost the same computational overhead as using the original model. We find that LoRA ensembles, applied on its own or on top of pre-existing regularization techniques, gives consistent improvements in predictive accuracy and uncertainty quantification.

There is something to be discovered between LoRA, QLoRA, and ensemble/MoE designs. I am digging into this niche because of an interesting bit I heard from sentdex (if you want to skip to the part I'm talking about, go to 13:58). Around the 15-minute mark he brings up QLoRA adapters (nothing new), but his approach was interesting.

He eventually shares he is working on a QLoRA ensemble approach with skunkworks (presumably Boeing skunkworks). This confirmed my suspicion. Better yet - he shared his thoughts on how all of this could be done. Watch and support his video for more insights, but the idea boils down to using one model and dynamically swapping the fine-tuned QLoRA adapters. I think this is a highly efficient and unapplied approach. Especially in that MoE and ensemble realm of design. If you're reading this and understood anything I said - get to building! This is a seriously interesting idea that could yield positive results. I will share my findings when I find the time to dig into this more.
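To make the adapter-swapping idea concrete, here is a toy sketch in plain Python with tiny matrices (all numbers invented): the effective weight is W + B·A, so serving a different "model" only means picking a different small (B, A) pair while the large base W stays resident in memory.

```python
# Toy sketch of "one base model, many adapters": the effective weight is
# W + B @ A, so swapping the small (B, A) pair retargets the model without
# touching or duplicating the large frozen base matrix W.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def effective_weight(W, adapter):
    """Apply a low-rank LoRA adapter (B, A) on top of base weight W."""
    B, A = adapter
    return madd(W, matmul(B, A))            # W + B @ A

W = [[1.0, 0.0], [0.0, 1.0]]                # "frozen" 2x2 base weight
adapters = {                                # rank-1 adapters: B is 2x1, A is 1x2
    "chat":    ([[1.0], [0.0]], [[0.0, 0.5]]),
    "summary": ([[0.0], [1.0]], [[0.5, 0.0]]),
}

# "Dynamically swapping" adapters is just picking a different (B, A) pair:
W_chat = effective_weight(W, adapters["chat"])
W_summary = effective_weight(W, adapters["summary"])
```

Each rank-1 adapter here stores 4 numbers against W's 4, but at real model scale the adapter is orders of magnitude smaller than the base, which is what makes large ensembles of adapters cheap.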


Author's Note

This post was authored by the moderator of !fosai@lemmy.world - Blaed. I make games, produce music, write about tech, and develop free open-source artificial intelligence (FOSAI) for fun. I do most of this through a company called HyperionTechnologies a.k.a. HyperTech or HYPERION - a sci-fi company.

Thanks for Reading!

This post was written by a human. For other humans. About machines. Who work for humans for other machines. At least for now... if you found anything about this post interesting, consider subscribing to !fosai@lemmy.world where you can join us on the journey into the great unknown!

Until next time!

Blaed

1406
7
submitted 1 year ago by Pluto@hexbear.net to c/technology@lemmy.ml
1407
350
1408
185
1409
-18
1410
70
submitted 1 year ago* (last edited 1 year ago) by ram@bookwormstory.social to c/technology@lemmy.ml

ghostarchive
context (ghostarchive)

An opinion piece recently appeared stating that Google “just flat out deletes queries and replaces them with ones that monetize better.” We don’t. The piece contains serious inaccuracies about how Google Search works. The organic (i.e., non-sponsored) results you see in Search are not affected by our ads systems.

In particular, the piece seems to misunderstand how keyword matching is related to showing relevant ads on Google Search.

Ad keyword matching is a long-standing and well-known process that is designed to connect people to relevant ads. Learn more here:
https://support.google.com/google-ads/answer/7478529 (ghostarchive)

A separate process, which has nothing to do with ads, is used to match organic results to a query, as explained here:
https://www.google.com/search/howsearchworks/how-search-works/ranking-results/ (archive.org)
It’s no secret that Google Search looks beyond the specific words in a query to better understand their meaning, in order to show relevant organic results. This is a helpful process that we’ve written about many times:

https://www.google.com/search/howsearchworks/how-search-works/ranking-results/ (archive.org)
https://blog.google/products/search/search-language-understanding-bert/ (archive.org)
https://blog.google/products/search/how-ai-powers-great-search-results/ (archive.org)
https://blog.google/products/search/google-search-breakthroughs-over-25-years/ (archive.org)

This ensures that Google Search can better show people organic results and connect them to helpful resources. If you make a spelling mistake, or search for a term that’s not on a page but where the page has a close synonym, or if you aren’t even sure exactly how to search for something, our meaning matching systems help.

1411
90

Hey everyone. I made a casual survey to see if people can tell the difference between human-made and AI generated art. Any responses would be appreciated, I'm curious to see how accurately people can tell the difference (especially those familiar with AI image generation)

1412
13
submitted 1 year ago by cyu@sh.itjust.works to c/technology@lemmy.ml
1413
17
emojis of the world (emojis.tilt.computer)

Emojis from the world is an attempt to archive and analyze the most frequently used emojis in human communication. You can help the process by submitting your selection, don't be afraid, it's easy, it's fun and it's private, we only request non-sensitive data, which includes emojis (😅) your age and a nickname. We simply use your current IP address to gain a rough idea of your location, but we do not save it.

1414
48
submitted 1 year ago by tree@lemmy.zip to c/technology@lemmy.ml

*Russia's incremental moves to eliminate online privacy regularly target VPNs. To 'free' itself from Google and Apple, in 2022 Russia launched its very own app store, which ironically offers dozens of VPNs. After the government recently announced the mandatory pre-installation of RuStore on tech gadgets, a draft law will outlaw censorship-circumventing VPNs on RuStore.*

Russia has been tightening the noose on VPN services for years. Many non-compliant foreign companies exited Russia when faced with a choice: compromise your customers’ privacy, or else.

Any that remained were required to submit to state regulation and cooperate fully with the authorities, while ensuring that a massive list of domains and URLs censored by the state could not be accessed.

How that has played out on the ground in practical terms isn’t clear, but everything now points to a worsening situation that will almost certainly lead to even more censorship.

Google Play and Apple’s App Store ‘Replaced’ By RuStore

As Russia’s three-day ‘special military operation’ in Ukraine enters its 588th day, everything is going in accordance with the Kremlin’s plan. Indeed, even small inconveniences linked to sanctions and other minor irritants are being transformed into new opportunities for the Russian people.

Limited access to Google Play and Apple’s App Store, for example, prompted the launch of an all-new, independent Russian app store in May 2022.

As the image above shows, ‘guaranteed secure access to applications’ is delivered under the watchful eye of the Ministry of Digital Development. So whether people are influencing on Rossgram, meeting like-minded people on Topface, or doing their thing on InTokRUS, government support shouldn’t be too far behind.

read more: https://torrentfreak.com/russia-prepares-rustore-vpn-ban-after-declaring-rustore-installation-mandatory-231004/

archive: https://archive.ph/tEBfx

1415
94
submitted 1 year ago by tree@lemmy.zip to c/technology@lemmy.ml

The flagship Acela fleet may have to cut service because Amtrak is running out of spare parts and using unsupported software on critical components while the new fleet is nowhere close to entering service.


A scathing new report paints a bleak picture of Amtrak’s highly profitable Acela route: trains are running out of spare parts, the new fleet is years delayed and not even close to entering service, and the current fleet is being maintained by harvesting the corpses of old trains and running unsupported software on old circuit boards made by companies that no longer exist.

The report, released this week by the Amtrak Office of Inspector General, indicates that Amtrak may have to run less service on the Acela route from Washington D.C. to Boston via New York. It calls into question the agency's ability to buy new trains while keeping to a schedule, even as it is in the midst of its most expensive train orders in history to replace a huge swath of the existing fleet thanks to an injection of funds from the 2021 Bipartisan Infrastructure Act.

In 2016, Amtrak bought 28 new train sets from Alstom to replace the current Acela fleet, which is now 25 years old. The order, which cost $2.5 billion in total (including upgraded rail yards and maintenance facilities), is for trains that have a larger capacity than the current fleet and can go 10 mph faster; they were supposed to enter service starting in 2021. That never happened.

According to the Amtrak OIG report, Alstom has made 12 of the train sets as well as 22 of the 28 café cars, but they all have defects, including windows that shatter “spontaneously.” But the report notes it is common for new train sets to have defects.

Alstom spokesperson Clifford Cole said in a statement, “We are surprised with the so-called ‘defects’ that the OIG report identifies” and added that the “modifications” Amtrak has requested “are in no way in the critical path of completion of this project.”

The bigger problem that is causing the years of delay has to do with the way new trains are tested before entering service. The new Acela fleet uses tilt technology familiar to anyone who has ridden high-speed rail in Europe or Asia, which allows the trains to round bends at more extreme angles. Because this is the first fleet to use that technology in the U.S., the Federal Railroad Administration has stringent testing requirements to ensure it is safe. This includes the use of advanced computer modeling before running the trains on the tracks.

According to the Amtrak OIG report, the computer model is the source of the hold-up. Alstom built most of the trains before the model was ready and the FRA still has not approved it. The most recent model submitted to the FRA in July, the 14th attempt, was in some ways worse than previous models, according to the report. Until the FRA validates the model, the trains cannot move forward with real-world testing, much less be put into service.

The report is critical of Amtrak and Alstom for allowing the trains to be built before the model was complete, as well as for not allowing Department of Transportation officials to view the model’s source code to help Alstom get it into shape. Alstom has declined to do so citing trade secrets, according to the report.

Cole, the Alstom spokesperson, said the company is “the world leader in high-speed trains” with 2,300 such trains in service with Alstom parts and technology. Cole said the company is working closely with Amtrak to get them into service. Cole disputed that it is unusual to build trains before the validation of the testing model.

The Amtrak OIG paints a dim picture of Acela service in the coming years. Amtrak’s latest announcement said the new trains would enter service in 2024, but the report implies that is unlikely (the subtitle of the report includes the phrase “Additional Delays and Cost Increases are Likely”). In an addendum to the report, Amtrak management said it agreed with the report’s findings and recommendations.

Moreover, the current Acela fleet is being maintained with spare parts harvested from old trains. Some of the companies that made parts that go into the current fleet no longer exist. The software used to run the train control system on the current Acela fleet is from the mid-1990s and the circuit board that runs the software was made by a company that no longer exists. Delays and trip unreliability are likely to increase as these fleets are duct-taped together while the new, untested fleet sits in the Philadelphia station rail yard.

link: https://www.vice.com/en/article/k7zpgv/a-computer-model-is-causing-years-of-delays-for-amtraks-new-high-speed-trains-scathing-audit-finds

1416
125
1417
28
submitted 1 year ago by cyu@sh.itjust.works to c/technology@lemmy.ml
1418
3
Delta Chat email (www.ircwebnet.com)
submitted 1 year ago by aktarus@slrpnk.net to c/technology@lemmy.ml
1419
13
1420
71
1421
16
submitted 1 year ago by cyu@sh.itjust.works to c/technology@lemmy.ml
1422
57
submitted 1 year ago by cyu@sh.itjust.works to c/technology@lemmy.ml
1423
245
submitted 1 year ago by Wilshire@lemmy.ml to c/technology@lemmy.ml
1424
79
1425
8

This article describes how to set up keyboard shortcuts in QubesOS so that you can temporarily disarm (pause) the BusKill laptop kill cord.

This allows the user to, for example, go to the bathroom without causing their computer to shut down or self-destruct.

Arm & Disarm BusKill in QubesOS

This is a guide that builds on part one: A Laptop Kill Cord for QubesOS (1/2). Before reading this, you should already be familiar with how to set up udev rules for BusKill on QubesOS.

  1. A Laptop Kill Cord for QubesOS (1/2)
  2. Disarm BusKill in QubesOS (2/2)

ⓘ Note: This post is adapted from its original article on Michael Altfield's blog.

What is BusKill?

What if someone literally steals your laptop while you're working with classified information inside a Whonix DispVM? The thief would also be able to recover data from previous DispVMs, as Disposable VMs' rootfs virtual files are not securely shredded after your DispVM is destroyed.

Are you a security researcher, journalist, or intelligence operative that works in QubesOS--exploiting Qubes' brilliant security-through-compartmentalization to keep your data safe? Do you make use of Whonix Disposable VMs for your work? Great! This post is for you.

I'm sure your QubesOS laptop has Full Disk Encryption and you're using a strong passphrase. But what if someone literally steals your laptop while you're working with classified information inside a Whonix DispVM? Not only will they get access to all of your AppVM's private data and the currently-running Whonix DispVM's data, but there's a high chance they'd be able to recover data from previous DispVMs--as Disposable VM's rootfs virtual files (volatile.img) are not securely shredded after your DispVM is destroyed by Qubes!

Let's say you're a journalist, activist, whistleblower, or a human rights worker in an oppressive regime. Or an intelligence operative behind enemy lines doing research or preparing a top-secret document behind a locked door. What do you do to protect your data, sources, or assets when the secret police suddenly batter down your door? How quickly can you actually act to shutdown your laptop and shred your RAM and/or FDE encryption keys?

BusKill Demo
Watch the BusKill explainer video for more info: youtube.com/v/qPwyoD_cQR4

BusKill utilizes a magnetic trip-wire that tethers your body to your laptop. If you suddenly jump to your feet or fall off your chair (in response to the battering ram crashing through your door) or your laptop is ripped off your table by a group of armed thugs, the data bus' magnetic connection will be severed. This event causes a configurable trigger to execute.

The BusKill trigger can be anything from:

  1. locking your screen or
  2. shutting down the computer or
  3. initiating a self-destruct sequence

While our last post described how to setup such a system in QubesOS with BusKill, this post will describe how to add keyboard shortcuts to arm & disarm the dead man switch (eg so you can go to the bathroom).

Disclaimer

This guide contains experimental files, commands, and software. The information contained in this article may or may not lead to corruption or total permanent deletion of some or all of your data. We've done our best to carefully guide the user so they know the risks of each BusKill trigger, but we cannot be responsible for any data loss that has occurred as a result of following this guide.

The contents of this guide are provided openly and are licensed under the CC-BY-SA license. The software included in this guide is licensed under the GNU GPLv3 license. All content here is consistent with the limitations of liability outlined in the respective licenses.

We highly recommend that any experiments with the scripts included in this article are used exclusively on a disposable machine containing no valuable data.

If data loss is a concern for you, then leave now and do not proceed with following this guide. You have been warned.

Release Note

Also be aware that, due to the risks outlined above, BusKill will not be released with this "self-destruct" trigger.

If you purchase a BusKill cable, it will only ship with non-destructive triggers that lock the screen or shutdown the computer. Advanced users can follow guides to add additional destructive triggers, such as the one described in this post, but they should do so at their own risk--taking carefully into consideration all of the warnings outlined above and throughout this article.

Again, if you buy a BusKill cable, the worst that can happen is your computer will abruptly shutdown.

Assumptions

This guide necessarily makes several assumptions outlined below.

sys-usb

In this guide, we assume that your QubesOS install has a USB-Qube named 'sys-usb' for handling USB events on behalf of dom0.

If you decided to combine your USB and networking Qubes at install time, then replace all references to 'sys-usb' in this guide with 'sys-net'.

If you decided to run your 'sys-usb' VM as a DisposableVM at install time, then replace all references to 'sys-usb' in this guide with its Disposable TemplateVM (eg 'fedora-36-dvm').

...And if you chose not to isolate your USB devices, then may god help you.

Udev Device Matching

BusKill in Linux uses udev to detect when the USB cable is severed. The exact udev rule that you use in the files below will depend on the drive you choose to use in your BusKill cable.

In this guide, we identify our BusKill-specific drive with the 'ENV{ID_MODEL}=="Micromax_A74"' udev property. You should replace this property with one that matches your BusKill-specific drive.

To determine how to query your USB drive for device-specific identifiers, see Introducing BusKill: A Kill Cord for your Laptop. Note that the `udevadm monitor --environment --udev` command should be run in the 'sys-usb' Qube.
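For example, here is a hypothetical variant of the rule used later in this guide that matches on the drive's serial number instead of its model. The `ID_SERIAL_SHORT` value below is made up; substitute whatever your own drive reports in the `udevadm monitor --environment --udev` output.

```
# Hypothetical buskill.rules line matching on serial instead of model
# (replace the serial value with your own drive's ID_SERIAL_SHORT)
ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_SERIAL_SHORT}=="0123456789AB", RUN+="/usr/bin/qrexec-client-vm dom0 buskill.softShutdown"
```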

ⓘ Note: If you'd prefer to buy a BusKill cable than make your own, you can buy one fully assembled here.

QubesOS Version

This guide was written for QubesOS v4.1.

[user@dom0 ~]$ cat /etc/redhat-release
Qubes release 4.1.2 (R4.1)
[user@dom0 ~]$

BusKill Files

This section will describe what files should be created and where.

Due to the design of QubesOS, it takes a bit of mental gymnastics to understand what we're doing and why. It's important to keep in mind that, in QubesOS

  1. The keyboard and UI are configured in 'dom0'
  2. USB devices (like the BusKill device) are routed to the 'sys-usb' VM
  3. dom0 has the privilege to execute scripts inside other VMs (eg 'sys-usb')
  4. By design, VMs should *not* be able to send arbitrary commands to be executed in dom0
  5. ...but via the qubes-rpc, we can permit some VMs (eg 'sys-usb') to execute a script in dom0 (though for security reasons, ideally such that no data/input is sent from the less-trusted VM to dom0 -- other than the name of the script)

Due to the constraints listed above:

  1. We'll be configuring the disarm button as a keyboard shortcut in dom0
  2. We'll be saving and executing the 'buskill-disarm.sh' script in 'sys-usb' (because these scripts manipulate our udev rules)
  3. The keyboard shortcut in dom0 will actually be executing the above script in 'sys-usb'
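For reference, the qubes-rpc permission described in point 5 was configured in our previous guide; in the legacy policy format still supported on Qubes 4.1, it takes roughly this form in dom0 (the file paths shown are assumptions based on the service names used in this guide):

```
# dom0:/etc/qubes-rpc/policy/buskill.softShutdown
sys-usb dom0 allow

# dom0:/etc/qubes-rpc/policy/buskill.lock
sys-usb dom0 allow
```

Each policy file permits 'sys-usb' to invoke the corresponding dom0 service by name only — no other data crosses the boundary.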

sys-usb

If you followed our previous guide to setting-up BusKill in QubesOS, then you should already have a file in 'sys-usb' at '/rw/config/buskill.rules'. You may even have modified it to trigger a LUKS Self-Destruct on removal of your BusKill device.

Because you're now experimenting with a new setup, let's go ahead and replace that old file with a new one that just executes a soft shutdown. You might need some days to get used to the new disarm procedure, and you probably don't want to suddenly lose all your data due to an accidental false-positive!

Execute the following on your 'sys-usb' Qube:

sudo mv /rw/config/buskill.rules /rw/config/buskill.rules.bak.`date "+%Y%m%d_%H%M%S"`
cat << EOF | sudo tee /rw/config/buskill.rules
################################################################################
# File:    sys-usb:/etc/udev/rules.d/buskill.rules -> /rw/config/buskill.rules
# Purpose: Add buskill rules. For more info, see: https://buskill.in/qubes-os/
# Authors: Michael Altfield 
# Created: 2020-01-02
# License: GNU GPLv3
################################################################################
ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_MODEL}=="Micromax_A74", RUN+="/usr/bin/qrexec-client-vm dom0 buskill.softShutdown"
EOF
sudo ln -s /rw/config/buskill.rules /etc/udev/rules.d/
sudo udevadm control --reload

Now, let's add a new udev '.rules' file. This one will always just lock your screen, and it's what will be put in-place when BusKill is "disarmed".

Execute the following on your 'sys-usb' Qube:

cat << EOF | sudo tee /rw/config/buskill.lock.rules
################################################################################
# File:    sys-usb:/etc/udev/rules.d/buskill.rules -> /rw/config/buskill.lock.rules
# Purpose: Just lock the screen. For more info, see: https://buskill.in/qubes-os/
# Authors: Michael Altfield 
# Created: 2023-05-10
# License: GNU GPLv3
################################################################################
ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_MODEL}=="Micromax_A74", RUN+="/usr/bin/qrexec-client-vm dom0 buskill.lock"
EOF

The careful reader will see that we're not actually disarming BusKill in the same sense as our BusKill GUI app. Indeed, what we're actually going to do is swap these two files for 30 seconds.

This way, if BusKill is armed and you remove the cable, your computer shuts down.

But if you want to disarm, the procedure becomes:

  1. Hit the "Disarm BusKill" keyboard shortcut (see below)
  2. Wait for the toast popup message indicating that BusKill is now disarmed
  3. Remove the cable within 30 seconds
  4. Your screen locks (instead of shutting down)

Personally, I can't think of a QubesOS user who would want to leave their machine unlocked when they go to the bathroom, so I figured this approach would work better than an actual disarm.

Bonus: when you return from your break, just plug the BusKill cable back in, and it'll already be armed (reducing the risk of user error due to forgetting to arm BusKill).

Now, let's add the actual 'buskill-disarm.sh' script to disarm BusKill:

Execute the following on your 'sys-usb' Qube:

cat << EOF | sudo tee /usr/local/bin/buskill-disarm.sh
#!/bin/bash
 
################################################################################
# File:    sys-usb:/usr/local/bin/buskill-disarm.sh
# Purpose: Temp disarm BusKill. For more info, see: https://buskill.in/qubes-os/
# Authors: Tom 
# Co-Auth: Michael Altfield 
# Created: 2023-05-10
# License: GNU GPLv3
################################################################################
 
# replace the 'shutdown' trigger with the 'lock' trigger
sudo rm /etc/udev/rules.d/buskill.rules
sudo ln -s /rw/config/buskill.lock.rules /etc/udev/rules.d/buskill.rules
sudo udevadm control --reload
 
# let the user know that BusKill is now temporarily disarmed
notify-send -t 21000 "BusKill" "Disarmed for 30 seconds" -i changes-allow
 
# wait 30 seconds
sleep 30
 
# replace the 'lock' trigger with the 'shutdown' trigger
sudo rm /etc/udev/rules.d/buskill.rules
sudo ln -s /rw/config/buskill.rules /etc/udev/rules.d/buskill.rules
sudo udevadm control --reload
notify-send -t 5000 "BusKill" "BusKill is Armed" -i changes-prevent
EOF
sudo chmod +x /usr/local/bin/buskill-disarm.sh
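The script's core mechanism is simply swapping which file a symlink points at. If you want to convince yourself the pattern works before touching real udev rules, here's a throwaway sketch using hypothetical temp files rather than the real '/etc/udev/rules.d/' paths:

```shell
# Demonstrate the symlink-swap pattern used by buskill-disarm.sh,
# using throwaway files instead of the real udev rules.
tmp=$(mktemp -d)
echo 'shutdown trigger' > "$tmp/buskill.rules"
echo 'lock trigger'     > "$tmp/buskill.lock.rules"

# "Armed": the active rules symlink points at the shutdown trigger
ln -s "$tmp/buskill.rules" "$tmp/active.rules"
cat "$tmp/active.rules"

# "Disarmed": remove the symlink and re-point it at the lock trigger
rm "$tmp/active.rules"
ln -s "$tmp/buskill.lock.rules" "$tmp/active.rules"
cat "$tmp/active.rules"

rm -r "$tmp"
```

In the real script, each swap is followed by `udevadm control --reload` so udev picks up whichever rules file is currently active.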

dom0

If you followed our previous guide to setting-up BusKill in QubesOS, then you shouldn't need to add any files to dom0. What you do need to do is set up some keyboard shortcuts.

In the QubesOS GUI, click on the big Q "Start Menu" in the top-left of your XFCE panel to open the Applications menu. Navigate to 'System Tools' and click Keyboard.

Screenshot of QubesOS with an arrow pointing to the "Q" Application Menu in the very top-left of the screen Screenshot of QubesOS Application Menu with "System Tools -> Keyboard" highlighted
Click the “Q” to open the QubesOS Application Menu Click System Tools -> Keyboard

Click the 'Application Shortcuts' Tab and then click the '+ Add' button on the bottom-left of the window.

Screenshot of QubesOS Keyboard Settings Window that shows the "Application Shortcuts" tab highlighted Screenshot of QubesOS Keyboard Settings Window that shows the "+ Add" button highlighted
Click the “Application Shortcuts” tab to add a Keyboard Shortcut in Qubes Click the “Add” Button to add a new Keyboard Shortcut in Qubes

In the 'Command' input field, type the following:

qvm-run sys-usb buskill-disarm.sh

The above command, run in dom0 whenever the shortcut is pressed, tells 'sys-usb' to execute the 'buskill-disarm.sh' script that we created above.

Screenshot of QubesOS Keyboard Settings Window that shows the "OK" button highlighted
After typing the command to be executed when the keyboard shortcut is pressed, click the "OK" button

Now click "OK" and, when prompted, type Ctrl+Shift+D (or whatever keyboard shortcut you want to bind to "Disarming BusKill").

Screenshot of QubesOS Keyboard Settings Window that shows the prompt "Press now the keyboard keys you want to use to trigger the command..." Screenshot of QubesOS Keyboard Settings Window that shows the selected Shortcut "Shift+Ctrl+D"
Type "Ctrl+Shift+D" or whatever keyboard shortcut you want to trigger BusKill to be disarmed for 30 seconds                                            

You should now have a keyboard shortcut binding for disarming BusKill!

Screenshot of QubesOS Keyboard Settings Window that shows the newly created keyboard shortcut for "Shift+Ctrl+D" at the top of the list

Test It!

At this point, you can test your new (temporary) BusKill Disarm functionality by:

  1. Plugging-in your BusKill cable
  2. Typing Ctrl+Shift+D
  3. Waiting for the toast popup message to appear indicating that BusKill is disarmed for 30 seconds
  4. Unplugging your BusKill cable

Your machine should lock, not shut down.

Screenshot of QubesOS with a toast message in the top-right that says "BusKill Disarmed for 30 Seconds"
After hitting the keyboard shortcut to disarm BusKill, you have 30 seconds to remove the cable

After 30 seconds, return to your computer and test the normal "arm" functionality:

  1. Plug-in your BusKill cable
  2. Unlock your screen
  3. Unplug your BusKill cable

Your computer should shut down, not lock.

Screenshot of QubesOS with a toast message in the top-right that says "BusKill is Armed"
30 seconds after hitting the keyboard shortcut, BusKill will arm itself

Troubleshooting

Is unplugging your USB device doing nothing? Having other issues?

See the Troubleshooting section in our original guide to using BusKill on QubesOS.

Limitations/Improvements

Security is porous. All software has bugs. Nothing is 100% secure. For more limitations to using BusKill on QubesOS, see the Limitations section in our original guide to using BusKill on QubesOS.

Buy a BusKill Cable

We look forward to continuing to improve the BusKill software and making BusKill more accessible this year. If you want to help, please consider purchasing a BusKill cable for yourself or a loved one. It helps us fund further development, and you get your own BusKill cable to keep you or your loved ones safe.

You can also buy a BusKill cable with bitcoin, monero, and other altcoins from our BusKill Store's .onion site.

Stay safe,
The BusKill Team
https://www.buskill.in/
http://www.buskillvampfih2iucxhit3qp36i2zzql3u6pmkeafvlxs3tlmot5yad.onion
