submitted 1 year ago by ram@lemmy.ca to c/technology@lemmy.ml

What happened in Marketing today!

1/ Twitter has gone rogue as Elon Musk introduces new rate limits.

Unverified accounts can read only 600 posts a day, verified ones 6,000, and new unverified accounts are limited to 300.

2/ Google Universal Analytics says goodbye to marketers.

  • New update: new GA4 users will not have access to some attribution models.

3/ Research from Attest & Modern Retail shows that since the US TikTok ban concerns began, TikTok consumption has risen 32% rather than declined.

4/ WPP partners with Contentful, a growing marketing CRM & data platform.

5/ Pinterest is going deep on targeting: the platform is patenting a way to infer audience interests from users' email data.

6/ Google introduces a Search Page Checkout option for businesses. No need to visit the website!

7/ New Reddit API Policies are also live now!

8/ Bud Light reintroduces the Bud Knight and denies allegations of firing top marketing executives.

9/ New FTC endorsement policies require influencers to label paid partnerships more clearly in their content.

  • This is mandatory for non-EU bloggers & influencers with US viewers.

10/ Links from your website or blog to Twitter now require sign-in first, so users can no longer simply view the content.

11/ Huge fitness influencer Joe Aesthetics dead at the age of 30. (Not marketing, but RIP)

12/ Adobe & Walmart are the highest-paying brands in influencer marketing.

I hope you liked the breakdown curated from The Social Juice Newsletter. I will see you on Monday with new updates.

submitted 1 year ago by coldv@lemmy.world to c/technology@lemmy.ml

🍿

submitted 1 year ago by Barns@lemmy.world to c/technology@lemmy.ml

Probably not but one can hope… to the Fediverse lemmings!

submitted 1 year ago by Cheebus@lemmy.world to c/technology@lemmy.ml

a Louis Rossmann video.

submitted 1 year ago by moeka89@lemm.ee to c/technology@lemmy.ml
submitted 1 year ago by const_void@lemmy.ml to c/technology@lemmy.ml
submitted 1 year ago by gronjo45@lemm.ee to c/technology@lemmy.ml

Hey everyone! So I've been doing some playing around with Linux Mint and have quite enjoyed it in the virtual machine. Thanks to all of you for the insight into the mindset I should take when approaching a new distribution.

Now that I'm not struggling as much with the terminal and other general computer organizational problems, I wanted to learn how to train my own chat-bot assistants. These assistants would be trained on monographs, textbooks, and other scholarly resources on topics I've been trying to learn more deeply.

I was wondering if anyone here has done this before, and if you have any advice to lend me!

Thanks for all the help!


https://web.archive.org/web/20230630111531/https://rockylinux.org/news/keeping-open-source-open/

Every user of Rocky Linux is valued and their contributions matter. From software engineers to IT professionals and hobbyists, together, we are all part of the Linux and open source community. The Rocky Enterprise Software Foundation was established based on our shared vision that open source software should remain stable, accessible to all, and managed by the community.

This commitment is ingrained in everything we do. Since the inception of the Rocky project, we have prioritized reproducibility, transparency in decision-making, and ensuring that no single vendor or company can ever hold the project hostage. When we first started, we discussed our model and mission, and we decided not to bisect the Enterprise Linux community. Instead, in the spirit of open source principles and standards, we created something compatible with Red Hat Enterprise Linux (RHEL). By following this approach, we adhere to a single standard for Enterprise Linux and align ourselves with the original goals of CentOS.

However, Red Hat has recently expressed their perspective that they ”do not find value in a RHEL rebuild.” While we believe this view is narrow-minded, Red Hat has taken a strong stance and limited access to the sources for RHEL to only their paying customers. These sources primarily consist of upstream open source project packages that are not owned by Red Hat.

Previously, we obtained the source code for Rocky Linux exclusively from the CentOS Git repository as they recommended. However, this repository no longer hosts all of the versions corresponding to RHEL. Consequently, we now have to gather the source code from multiple sources, including CentOS Stream, pristine upstream packages, and RHEL SRPMs.

Moreover, Red Hat’s Terms of Service (TOS) and End User License Agreements (EULA) impose conditions that attempt to hinder legitimate customers from exercising their rights as guaranteed by the GPL. While the community debates whether this violates the GPL, we firmly believe that such agreements violate the spirit and purpose of open source. As a result, we refuse to agree with them, which means we must obtain the SRPMs through channels that adhere to our principles and uphold our rights.

The latency of this status update has been due to our desire to balance the needs of the community and technical requirements, with the challenges to open source and community principles that Red Hat has created. Fortunately, there are alternative methods available to obtain source code, and we would like to highlight two examples:

One option is through the usage of UBI container images which are based on RHEL and available from multiple online sources (including Docker Hub). Using the UBI image, it is easily possible to obtain Red Hat sources reliably and unencumbered. We have validated this through OCI (Open Container Initiative) containers and it works exactly as expected.

Another method that we will leverage is pay-per-use public cloud instances. With this, anyone can spin up RHEL images in the cloud and thus obtain the source code for all packages and errata. This is the easiest for us to scale as we can do all of this through CI pipelines, spinning up cloud images to obtain the sources via DNF, and post to our Git repositories automatically.

These methods are possible because of the power of GPL. No one can prevent redistribution of GPL software. To reiterate, both of these methods enable us to legitimately obtain RHEL binaries and SRPMs without compromising our commitment to open source software or agreeing to TOS or EULA limitations that impede our rights. Our legal advisors have reassured us that we have the right to obtain the source to any binaries we receive, ensuring that we can continue advancing Rocky Linux in line with our original intentions.

While we continuously explore other options, the aforementioned approaches are subject to change. However, our unwavering dedication and commitment to open source and the Enterprise Linux community remain steadfast.

In the unfortunate event that Red Hat decides to ramp up efforts to negatively impact the community, Rocky Linux will persist in serving the best interests of the entire open source community.

As a reminder, we welcome everyone to contribute to our efforts. You can learn more about how you can join us and all of the various ways to contribute on our wiki. Want to voice your support for Rocky Linux? Help us spread the word by sharing with your network, engaging with or contributing to the community, or telling friends about us. Our community is vital to our success, and we value your support. Together, we can make Rocky Linux continue to thrive!

submitted 1 year ago by ardi60@lemmy.ml to c/technology@lemmy.ml
submitted 1 year ago by adude007@lemmy.ml to c/technology@lemmy.ml
submitted 1 year ago by Blaed@lemmy.world to c/technology@lemmy.ml

Welcome to the FOSAI Nexus!

(v0.0.1 - Summer 2023 Edition)

The goal of this knowledge nexus is to act as a link hub for software, applications, tools, and projects that are all FOSS (free open-source software) designed for AI (FOSAI).

If you haven't already, I recommend bookmarking this page. It is designed to be periodically updated in new versions I release throughout the year. This is due to the rapid rate in which this field is advancing. Breakthroughs are happening weekly. I will try to keep up through the seasons while including links to each sequential nexus post - but it's best to bookmark this since it will be the start of the content series, giving you access to all future nexus posts as I release them.

If you see something here missing that should be added, let me know. I don't have visibility over everything. I would love your help making this nexus better. Like I said in my welcome message, I am no expert in this field, but I teach myself what I can to distill it in ways I find interesting to share with others.

I hope this helps you unblock your workflow or project and empowers you to explore the wonders of emerging artificial intelligence.

Consider subscribing to /c/FOSAI if you found any of this interesting. I do my best to make sure you stay in the know with the most important updates to all things free open-source AI.

Find Us On Lemmy!

!fosai@lemmy.world


Fediverse Resources

Lemmy


Large Language Model Hub

Download Models

oobabooga

text-generation-webui - a big community favorite Gradio web UI by oobabooga designed for running almost any free open-source large language model downloaded from Hugging Face, including (but not limited to) LLaMA, llama.cpp, GPT-J, Pythia, and OPT. Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation. It is highly compatible with many model formats.

Exllama

A standalone Python/C++/CUDA implementation of Llama for use with 4-bit GPTQ weights, designed to be fast and memory-efficient on modern GPUs.

gpt4all

Open-source assistant-style large language models that run locally on your CPU. GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade processors.

TavernAI

The original branch of software SillyTavern was forked from. This chat interface offers very similar functionality but has less cross-client compatibility with other chat and API interfaces (compared to SillyTavern).

SillyTavern

Developer-friendly, Multi-API (KoboldAI/CPP, Horde, NovelAI, Ooba, OpenAI+proxies, Poe, WindowAI(Claude!)), Horde SD, System TTS, WorldInfo (lorebooks), customizable UI, auto-translate, and more prompt options than you'd ever want or need. Optional Extras server for more SD/TTS options + ChromaDB/Summarize. Based on a fork of TavernAI 1.2.8

Koboldcpp

A self-contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint. What does it mean? You get llama.cpp with a fancy UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios, and everything Kobold and Kobold Lite have to offer. In a tiny package around 20 MB in size, excluding model weights.

KoboldAI-Client

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author's Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.

h2oGPT

h2oGPT is a large language model (LLM) fine-tuning framework and chatbot UI with document question-answer capabilities. Documents help ground LLMs against hallucinations by providing context relevant to the instruction. h2oGPT is a fully permissive Apache V2 open-source project for 100% private and secure use of LLMs and document embeddings for document question-answering.


Image Diffusion Hub

Download Models

StableDiffusion

Stable Diffusion is a text-to-image diffusion model capable of generating photo-realistic and stylized images. This is the free alternative to MidJourney. It is rumored that MidJourney originated as a heavily modified and tuned version of Stable Diffusion that was then made proprietary.

SDXL (Stable Diffusion XL)

With Stable Diffusion XL, you can create descriptive images with shorter prompts and generate words within images. The model is a significant advancement in image generation capabilities, offering enhanced image composition and face generation that results in stunning visuals and realistic aesthetics.

ComfyUI

A powerful and modular stable diffusion GUI and backend. This new and powerful UI will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart-based interface.

ControlNet

ControlNet is a neural network structure to control diffusion models by adding extra conditions. This is a very popular and powerful extension to add to AUTOMATIC1111's stable-diffusion-webui.

TemporalKit

An all-in-one solution for adding temporal stability to a Stable Diffusion render via an automatic1111 extension. You must install FFmpeg and add it to your PATH before running this.

EbSynth

Bring your paintings to animated life. This software can be used in conjunction with StableDiffusion + ControlNet + TemporalKit workflows.

WarpFusion

A TemporalKit alternative to produce video effects and animation styling.


Training & Education

LLMs

Diffusers


Bonus Recommendations

AI Business Startup Kit

LLM Learning Material from the Developer of SuperHOT (kaiokendev):

Here are some resources to help with learning LLMs:

Andrej Karpathy’s GPT from scratch

Huggingface’s NLP Course

And for training specifically:

Alpaca LoRA

Vicuna

Community training guide

Of course for papers, I recommend reading anything on arXiv’s CS - Computation & Language that looks interesting to you: https://arxiv.org/list/cs.CL/recent.


Support Developers!

Please consider donating, subscribing to, or buying a coffee for any of the major community developers advancing Free Open-Source Artificial Intelligence.

If you're a developer in this space and would like to have your information added here (or changed), please don't hesitate to message me!

TheBloke

Oobabooga

Eric Hartford

kaiokendev


Major FOSAI News & Breakthroughs


Looking for other open-source projects based on these technologies? Consider checking out this GitHub Repo List I made based on stars I have collected throughout the last year or so.


Huge news for AMD fans and those who are hoping to see a real* open alternative to CUDA that isn't OpenCL!

*: Intel doesn't count, they still have to get their shit together in rendering things correctly with their GPUs.

We plan to expand ROCm support from the currently supported AMD RDNA 2 workstation GPUs: the Radeon Pro v620 and w6800 to select AMD RDNA 3 workstation and consumer GPUs. Formal support for RDNA 3-based GPUs on Linux is planned to begin rolling out this fall, starting with the 48GB Radeon PRO W7900 and the 24GB Radeon RX 7900 XTX, with additional cards and expanded capabilities to be released over time.

submitted 1 year ago by Blaed@lemmy.world to c/technology@lemmy.ml

cross-posted from: https://lemmy.world/post/809672

A very exciting update comes to koboldcpp - an inference software that allows you to run LLMs on your PC locally using your GPU and/or CPU.

Koboldcpp is one of my personal favorites. Shoutout to LostRuins for developing this application. Keep the release memes coming!

koboldcpp-1.33 Ultimate Edition Release Notes

A.K.A The "We CUDA had it all edition"

The KoboldCpp Ultimate edition is an All-In-One release with previously missing CUDA features added in, with options to support both CL and CUDA properly in a single distributable. You can now select CUDA mode with --usecublas, and optionally low VRAM using --usecublas lowvram. This release also contains support for OpenBLAS, CLBlast (via --useclblast), and CPU-only (No BLAS) inference.

Backported CUDA support for all prior GGML file formats. CUDA mode now correctly supports every earlier version of GGML files (earlier quants from GGML, GGMF, and GGJT v1, v2, and v3, with the respective feature sets at the time they were released, should load and work correctly).

Ported the memory optimizations I added for OpenCL over to CUDA; CUDA now uses less VRAM, and you may be able to offload even more layers than upstream llama.cpp (testing needed).

Ported over CUDA GPU acceleration via layer offloading for MPT, GPT-2, GPT-J, and GPT-NeoX.

Updated Lite, pulled updates from upstream, various minor bugfixes. Also, instruct mode now allows any number of newlines in the start and end tag, configurable by user.

Added long context support using Scaled RoPE for LLAMA, which you can use by setting --contextsize greater than 2048. It is based off the PR here ggerganov#2019 and should work reasonably well up to over 3k context, possibly higher.
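The core idea behind scaled RoPE (position interpolation) can be sketched in a few lines of NumPy: positions are compressed by the ratio of the trained context window (2048 for LLaMA) to the requested one, so the rotation angles never exceed the range seen during training. This is a simplified illustration of the general technique, not koboldcpp's actual implementation; the function and parameter names are my own.

```python
import numpy as np

def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
    # Rotary embedding angles: theta_i = pos * base^(-2i/dim),
    # with positions optionally compressed by `scale` (position interpolation).
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions * scale, inv_freq)

trained_ctx, target_ctx = 2048, 4096
scale = trained_ctx / target_ctx  # 0.5: squeeze 4096 positions into the trained range

plain = rope_angles(np.arange(target_ctx))
scaled = rope_angles(np.arange(target_ctx), scale=scale)

# With scaling, position 4094 sees the same angles as position 2047 without it,
# so the model never encounters rotations beyond what it was trained on.
assert np.allclose(scaled[4094], plain[2047])
```

The trade-off is finer-grained (interpolated) positions rather than out-of-range ones, which is why quality tends to hold up "reasonably well" rather than perfectly as context grows.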

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller. Alternatively, drag and drop a compatible ggml model on top of the .exe, or run it and manually select the model in the popup dialog.

...once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
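Once the server is loaded, any HTTP client can talk to it. Here is a minimal Python sketch against the Kobold-compatible endpoint; the `/api/v1/generate` path and the `results[0].text` response shape follow the KoboldAI API that koboldcpp emulates, but treat the exact field names as assumptions to verify against your version.

```python
import json
from urllib import request

API_URL = "http://localhost:5001/api/v1/generate"  # koboldcpp's default port

def build_payload(prompt, max_length=80, temperature=0.7):
    # Minimal body for a Kobold-style generation request.
    return {"prompt": prompt, "max_length": max_length, "temperature": temperature}

def generate(prompt):
    # POST the payload to a running koboldcpp instance and return the generated text.
    data = json.dumps(build_payload(prompt)).encode()
    req = request.Request(API_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# generate("Once upon a time")  # uncomment with a koboldcpp server running
```

The full koboldai client wraps this same API, so anything scripted this way stays compatible with both.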

For more information, be sure to run the program with the --help flag.

If you found this post interesting, please consider subscribing to the /c/FOSAI community at !fosai@lemmy.world where I do my best to keep you in the know with the most important updates in free open-source artificial intelligence.

Interested, but not sure where to begin? Try starting with Your Lemmy Crash Course to Free Open-Source AI

submitted 1 year ago by ptz@dubvee.org to c/technology@lemmy.ml

Although watching TV shows from the 1970s suggests otherwise, the era wasn't completely devoid of all things resembling modern communication systems. Sure, the 50Kbps modems that the ARPANET ran on were the size of refrigerators, and the widely used Bell 103 modems only transferred 300 bits per second. But long-distance digital communication was common enough, relative to the number of computers deployed. Terminals could also be hooked up to mainframe and minicomputers over relatively short distances with simple serial lines or with more complex multidrop systems. This was all well known; what was new in the '70s was the local area network (LAN). But how to connect all these machines?


Virgin Galactic will be launching their first commercial, sub-orbital space flight today. Link is to the Live Stream for the event.

submitted 1 year ago by Blaed@lemmy.world to c/technology@lemmy.ml

cross-posted from: https://lemmy.world/post/800062

Eric Hartford (a.k.a. faldore) has announced OpenOrca, an open-source dataset and series of instruct-tuned language models he plans to release alongside Microsoft's new open-source challenger, Orca.

You can support Eric and all of the hard work he has done for the open-source community by following his newsletter on his site here.

Eric, if you're reading this and would like to share a donation link - I would be more than happy to include it on this post and any future regarding your work. Shoot me a message anytime.

Eric Hartford's Announcement

Today I'm announcing OpenOrca.

https://erichartford.com/openorca

https://twitter.com/erhartford/status/1674214496301383680

The dataset is completed. ~1mil of GPT4 augmented flanv2 instructions and ~3.5mil of GPT3.5 augmented flanv2 instructions.

We are currently training on LLaMA-13b. We expect completion in about 2 weeks.

When training is complete, we will release the dataset and the model at the same time.

We are seeking GPU compute sponsors for various targets, please consult the blog post and reach out if interested.

Thank you to our sponsors!

https://chirper.ai

https://preemo.io

https://latitude.sh

A few more highlights from the full article, which you should read here when you have a chance.

We expect to release OpenOrca-LLaMA-13b in mid-July 2023. At that time we will publish our evaluation findings and the dataset.

We are currently seeking GPU compute sponsors for training OpenOrca on the following platforms:

Falcon 7b, 40b

LLaMA 7b, 13b, 33b, 65b

MPT-7b, 30b

Any other targets that get a sponsor. (RWKV, OpenLLaMA)

Dataset consists of:

  • ~1 million of FLANv2 augmented with GPT-4 completions

  • ~3.5 million of FLANv2 augmented with GPT-3.5 completions

If you found this post interesting, please consider subscribing to the /c/FOSAI community at !fosai@lemmy.world where I do my best to keep you in the know with the most important updates in free open-source artificial intelligence.


Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. All such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. Help the blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago