Postdoc in engineering research - we’re using machine learning to predict chemical properties relevant to combustion, speeding up the discovery of cleaner liquid fuels as we transition away from fossil fuels!
I use a few used Dell OptiPlex 7050 Micros; they're great for the price (and have a small footprint, too!)
Edit: for storage I have an HP MicroServer Gen10 Plus
Self-hosting lemmy.blue!
YouTube TV and Spotify. There’s a workaround for everything else!
From what I've seen, Pythorhead is focusing on "higher-level" functions, while Plemmy is focusing on LemmyHttp API parity and returning request responses. Who knows, maybe we'll implement some more complex functionality in the future!
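To illustrate the distinction (with made-up names, not either library's actual interface), here's a rough Python sketch: an "API parity" style wraps each LemmyHttp endpoint thinly and hands back the raw response, while a higher-level helper hides the request/response plumbing. The endpoint path and parameters follow Lemmy's v3 HTTP API; the class and function names are hypothetical.

```python
import requests

# Low-level "API parity" style: one method per LemmyHttp endpoint,
# same parameters as the HTTP API, returning the raw requests.Response
# for the caller to unpack themselves.
class LemmyHttpSketch:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/") + "/api/v3"

    def get_posts(self, community_name: str, limit: int = 10) -> requests.Response:
        # Thin wrapper: no post-processing, just the HTTP call
        return requests.get(
            f"{self.base_url}/post/list",
            params={"community_name": community_name, "limit": limit},
            timeout=30,
        )

# Higher-level style: a convenience function that hides the plumbing
# and returns plain Python objects instead of a Response.
def fetch_post_titles(api: LemmyHttpSketch, community_name: str) -> list[str]:
    response = api.get_posts(community_name)
    response.raise_for_status()
    return [p["post"]["name"] for p in response.json()["posts"]]
```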
No problem, happy to help out the fediverse!
Very cool, I'll do some digging myself!
Interesting - it would be even more interesting if they provided some metrics on how much time is actually being saved compared to RSS.
What's worse:
Meanwhile Kim Dotcom, the founder of Megaupload, is continuing to fight the U.S. charges and threat of extradition. He has said he expects his former colleagues to testify against him as part of the deal they struck.
Ortmann was sentenced to 2 years and 7 months while van der Kolk was sentenced to 2 years and 6 months. Each had faced a maximum sentence of 10 years in prison but argued they should be allowed to serve their sentences in home detention.
Does the punishment fit the "crime"?
He’s gonna run Twitter into the ground like you would your favorite car
Sort of - the models are able to predict numerical property values given a large amount of data to observe during training. In other words, given the scope of known data, we can extrapolate predictions for new data. The predictive capabilities of the model are only as reliable as the data used to train it, and unfortunately in our case we only have hundreds of samples per property, as opposed to other ML tasks with millions of samples. This highlights how much time it actually takes to find, synthesize, and experimentally test molecules!
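For a sense of what "only hundreds of samples" means in practice, here's a minimal scikit-learn sketch (synthetic data standing in for real measurements and descriptors) of training a regressor and honestly estimating how well it generalizes at that scale:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for a real dataset: ~300 molecules, each described by a handful
# of numerical descriptors, with one measured combustion property as the
# target. Real descriptors and targets would come from experiments and
# cheminformatics tooling, not random numbers.
n_samples, n_descriptors = 300, 8
X = rng.normal(size=(n_samples, n_descriptors))
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n_samples)

model = RandomForestRegressor(n_estimators=500, random_state=0)

# With only hundreds of samples, cross-validation is essential for an
# honest estimate of predictive performance beyond the training set.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold CV R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```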
Unfortunately neural networks, especially traditional multi-layered feed-forward networks, are often seen as a "black box" approach to regression and classification, where we don't really understand how a network learns or why its weights are tuned the way they are. Analysis methods have come a long way, but ambiguity still exists.
What we have done, however, is find the statistical significance of specific molecular substructures as they relate to combustion properties. For example, when we trained our models to predict sooting propensity (the amount of pollution formed during combustion), we noticed that various algorithms such as random forest regression were putting a heck of a lot more weight on a molecular variable measuring path length (length of carbon chains, number of higher-order bonds). From this, we were able to conclude that long-chain hydrocarbons with a higher number of double or triple bonds form more soot, which gave us an idea of which mechanistic pathways to stay away from when producing bio-oil.
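As a sketch of that kind of analysis (again with synthetic data and made-up descriptor names), permutation importance in scikit-learn gives a similar read on which molecular variables a random forest leans on; it's a more robust alternative to the impurity-based importances built into the forest itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical descriptor names standing in for real molecular variables
names = ["path_length", "n_double_bonds", "n_triple_bonds", "mol_weight", "n_rings"]
X = rng.normal(size=(300, len(names)))
# Synthetic target that leans heavily on path length, mimicking the
# pattern described above for sooting propensity
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data: shuffle one descriptor at a
# time and measure how much the model's score drops
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean in sorted(zip(names, result.importances_mean),
                         key=lambda p: p[1], reverse=True):
    print(f"{name:>15s}: {mean:.3f}")
```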
As for fuel-grade molecules, we've found that furanic compounds and compounds with cyclohexane substructures generally match diesel fuel in operating efficiency (cetane number) and energy density (lower heating value, MJ/kg), and operate well in various environments (optimal flash, boiling, and cloud points, deg. C), all while producing much less soot (yield sooting index). The next step is finding a cheap way to mass produce the stuff!
Recently we've started down the rabbit hole of fungus-derived bio-oils: terpenes (yes, those terpenes!) derived from fungi may be useful as soot-reducing fuel additives.