Unlike many other tools, VOID is not just another note-taking app.

No offense, but if I got a penny for every time I've heard this...

The plot seems so random that it probably fucks, in a good way.

While I had my doubts when I saw the thumbnail, I think this has the potential to be quite fun.

The textures look kind of poor, but I guess that rolls with the concept of the game.

The desert looks straight out of Breaking Bad, so I guess you can borrow some storyline quirks from there.

But really, put some work into the cover image. Based on the image, I thought this would be another generic low-effort mobile game ad.

it only enabled its operators access to encrypted communications...

What the heck? Isn't this much worse than simple microphone access?

To be honest (although I am guilty of using ChatGPT way too often), I have never failed to find a question and an answer to a similar problem on Stack Overflow.

The realm is saturated: 90% of the common questions are already answered. Complex problems that have not yet been asked and answered are probably too difficult to formulate on Stack Overflow anyway.

It should be kept as what it is: an enormous repository of knowledge.

17
database greenhorn (discuss.tchncs.de)
submitted 7 months ago* (last edited 7 months ago) by PoisonedPrisonPanda@discuss.tchncs.de to c/programming@programming.dev

Hi my dears, I have an issue at work where we have to work with millions (~150 million) of product data points. We are using SQL Server because it was available in-house for development. However, with various tables growing beyond 10 million rows, the server becomes quite slow and wait/buffer time exceeds 7000 ms/sec, which is tearing down our complete setup of various microservices that continuously read, write, and delete from the tables. All the Stack Overflow answers lead to: it's complex, go read a 2000-page book.

The thing is, my queries are not that complex. They simply go through the whole table to identify duplicates, which are then not processed further, because the processing takes time (which we thought would be the bottleneck). But the time saved by not processing duplicates now seems smaller than the time it takes to compare batches against the SQL table. The other culprit is that our server runs on an HDD, which at roughly 150 MB/s read and write is probably at its limit.

The question is: is there a wizard move to bypass any of my restrictions, or is a change in the setup and algorithm inevitable?

Edit: I know my question seems broad, but as I am new to database architecture I welcome any input and discussion, since the topic is a lifetime of know-how by itself. Thanks for any feedback.
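For what it's worth, the usual escape hatch for the pattern described above is to stop scanning the whole table for every batch and instead check incoming records against a key that can be looked up directly. Here is a minimal client-side sketch in Python; the field names `sku` and `supplier` are hypothetical stand-ins for whatever columns actually define a duplicate in the real schema:

```python
import hashlib

def dedup_key(record: dict) -> str:
    """Stable hash of the fields assumed to define a duplicate (here: sku + supplier)."""
    raw = f"{record['sku']}|{record['supplier']}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

def filter_new(batch: list[dict], seen: set[str]) -> list[dict]:
    """Return only records whose dedup key has not been seen yet.

    `seen` is updated in place, so repeated calls across batches keep
    filtering against everything processed so far.
    """
    fresh = []
    for rec in batch:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            fresh.append(rec)
    return fresh

# Example: the second record is a duplicate of the first and gets dropped.
seen: set[str] = set()
batch = [
    {"sku": "A1", "supplier": "acme"},
    {"sku": "A1", "supplier": "acme"},
    {"sku": "B2", "supplier": "acme"},
]
fresh = filter_new(batch, seen)
```

The same idea can live on the database side instead: persist the hash as a computed column with a unique index, so duplicate checks become an index seek rather than a full-table scan, and inserts of duplicates fail fast. That is only a sketch of one direction, not a drop-in fix for the setup described here.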

[-] PoisonedPrisonPanda@discuss.tchncs.de 60 points 2 years ago* (last edited 2 years ago)

So many publications are not worth reading.

I'm all in for a revolution in science.

No more bullshitting. No "800 words" required.

If you're able to explain something in five sentences plus a table and a plot with the results, do it. No need to elaborate over five pages on how well you can work a thesaurus.

Edit: Usually I read the headline and put the article into my BibTeX library.

Jokes aside.

It really is exhausting to always be seeking knowledge.

I think everybody needs a little bit of silence in their head once in a while.

I think Steam in general is proof that it's a service issue.

Being 3 years into his PhD.

This meme became even better.

I'm fucking impressed.

I'm amazed by people's creativity.

Until now, I hadn't thought that things like translations could be misused for hate speech.

