[-] pcouy@lemmy.pierre-couy.fr 1 points 1 day ago

I can recommend some stuff I've been using myself :

  • Dolibarr as an ERP + CRM : requires some initial configuration work, since most (if not all) features are disabled by default and you enable them based on what you need. It also has a marketplace with a bunch of modules you can buy
  • Gitea to manage codebases for customer projects. It can also do CI but I've not looked into it yet
  • Prometheus and its ecosystem (mostly promtail and grafana) for monitoring and alerting
  • docker-mailserver : makes it quite easy to self-host a full mail server. The guides in their docs made it painless for me to configure DMARC/SPF/other stuff that makes e-mail notoriously hard to host
  • Cal.com as a self hostable alternative to calendly
  • Authentik for single sign-on and centralized permission management
  • plausible for lightweight analytics
  • a mix of wireguard, iptables and nginx to basically achieve the same as cloudflare proxying and tunnels (see the sketch right after this list)
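
To illustrate that last point, here is a minimal sketch of the idea, assuming a public VPS running nginx and a wireguard tunnel to a home server whose (hypothetical) wireguard IP is 10.8.0.2 :

# On the public VPS -- sketch only, adapt the names, IPs and certificates to your setup
server {
    listen 443 ssl;
    server_name app.example.org;

    location / {
        # Forward everything through the wireguard tunnel to the home server
        proxy_pass http://10.8.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

The iptables rules on the VPS then mostly just need to accept 443/tcp and the wireguard UDP port, so the home server never exposes anything directly to the internet.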

I design, deploy and maintain such infrastructures for my own customers, so feel free to DM me with more details about your business if you need help with this

36

Cross-posted from : https://lemmy.pierre-couy.fr/post/653426

This is a guide I wrote for Immich's documentation. It features some Immich-specific parts, but should be quite easy to adapt to other use cases.

It is also possible (and not technically hard) to self-host a protomaps release, but this would require 100GB+ of disk space (which I can't spare right now). The main advantages of this guide over hosting a full tile server are :

  • it's a single nginx config file to deploy
  • it saves you some storage space since you're only hosting tiles you've previously viewed. You can also tweak the maximum cache size to your needs
  • it is easy to configure a trade-off between map freshness and privacy by tweaking the cache expiration delay (both knobs appear in the short snippet after this list)
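
For reference, the two tuning knobs mentioned above are standard nginx proxy cache parameters ; a minimal sketch, using the same values I use myself :

# Sketch only -- adjust the cache size and expiration to your needs
# In the http block :
proxy_cache_path /var/cache/nginx/osm levels=1:2 keys_zone=osm:100m max_size=5g inactive=180d;
# In the tile proxy location block :
proxy_cache_valid 180d;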

If you try to follow it, please send me some feedback on the content and the wording, so I can improve it


[-] pcouy@lemmy.pierre-couy.fr 29 points 1 week ago

In my experience, OnlyOffice has the best compatibility with M$ Office. You should try it if you haven't

[-] pcouy@lemmy.pierre-couy.fr 87 points 1 month ago

On this day, exactly 12 years ago (9:30 EDT, 1 Aug 2012), occurred the most expensive software bug ever, in terms of both dollars lost per second and total dollars lost. The company managed to pare its losses through the heroics of Goldman Sachs, and "only" lost $457 million (which still led to its dissolution).

Devs were tasked with porting their HFT bot to an upcoming NYSE API service that was announced to go live less than 33 days in the future. So they started a death-march sprint of 80-hour weeks. The HFT bot was written in C++. Because they didn't want to have to recompile dependent code, the lead architect decided to keep the exact same class and method signature for their PowerPeg::trade() method, which belonged to the automated testing bot they had been using since 2003. This also meant that they did not have to update the WSDL for the clients that used the bot, either.

They ripped out the old dead code and put in the new code. Code that actually called real logic, instead of the test code, which was designed, by default, to buy the highest offer given to it.

They tested it, they wrote unit tests, everything looked good. So they decided to deploy it at 8 AM EDT, 90 minutes before market open. QA testers tested it in prod and gave the all clear. Everyone was really happy. They'd done it. They'd made the tight deadline and deployed with just 90 minutes to spare...

They immediately went to a sprint standup and then a sprint retro meeting. Per their office policy, they left their phones (on mute) at their desks.

During the retro, the markets opened at 9:30 EDT, and the new bot went WILD (!!) It just started buying the highest offer it was given for all of the stocks in its buy list. The markets didn't react very abnormally, because it just looked like someone was bullish. But the bot was buying about $5 million worth of shares per second… Within 2 minutes, warning alarms were going off in their internal banking sector… a huge percentage of their $2.5 billion in operating cash was being depleted, and fast!

So many people tried to contact the devs, but they were in a remote office in Hoboken due to the high price of real estate in Manhattan. And their phones were off and no one was at their computers.

The CEO was seen getting people to run through the halls of the building, yelling, and finally the devs noticed. 11 minutes had gone by and the bots had bought over $3 billion of stock. The total cash reserves were depleted. The company was in SERIOUS trouble...

None of the devs could find the source of the bug. The CEO, desperate, asked for solutions. "KILL THE SERVERS!!" one of the devs shouted!!

They got techs at the datacenter next to the NYSE building to find all 8 servers that ran the bots and DESTROY them with fire axes, just ripping the wires out… And finally, after 37 minutes, the bots stopped trading. Total paper loss: $10.8 billion.

The SEC + NYSE refused to rewind the trades for all but 6 stocks, so the on-paper losses were still at $8 billion. There was no way they could pay. Goldman Sachs stepped in and offered to buy all the stocks at a for-profit price of $457 million, which they agreed to. All in all, the company lost close to $500 million, all of its corporate clients left, and it went out of business a few weeks later.

Now what was the cause of the bug? Fat-fingered human error during the release.

The sysop had declined to implement CI/CD, which was still in its infancy, probably because that was his full-time job and he was making like $300,000 in 2012 dollars ($500k today). There were 8 servers that housed the bot and a few clients on the same servers.

The sysop had typed out and pasted the correct rsync commands to get the new C++ binary onto the servers, except for server 5 of 8: in the 5th command, he had an extra 5 in the server name. That rsync failed, but because he had pasted all of the commands at once, he didn't notice...

Because the code kept the exact same method signature for the trade() method, server 5 was happy to buy up the most expensive offer it was given, since it was still running the Sad Path test trading software. If they had changed the method signature, it wouldn't have run and the bug wouldn't have happened.

At 9:43 EDT, the devs decided collectively to do a "rollback" to the previous release. This was the worst possible mistake, because it put the Power Peg dead code back onto the other 7 servers, causing the problems to grow exponentially. It then took about 3 more minutes for anyone in Finance to actually inform them. At that point, more than $50 million per second was being lost due to the bug.

It wasn't until 9:58 EDT, once the servers had all been destroyed, that the trading stopped.

Here is a description of the aftermath:

It was not until 9:58 a.m. that Knight engineers identified the root cause and shut down SMARS on all the servers; however, the damage had been done. Knight had executed over 4 million trades in 154 stocks totaling more than 397 million shares; it assumed a net long position in 80 stocks of approximately $3.5 billion as well as a net short position in 74 stocks of approximately $3.15 billion.

28 minutes. $8.65 billion inappropriately purchased. ~1680 seconds. $5.18 million/second.

But after the rollback at 9:43, about $4.4 billion was lost. ~900 seconds. ~$49 million/second.

That was the story of how a bad software decision and fat-fingered manual production release destroyed the most profitable stock trading firm of the time, and was the most expensive software bug in human history.

[-] pcouy@lemmy.pierre-couy.fr 27 points 1 month ago

I think they do get marked as dead after the Bodis subdomain does not act as a Lemmy instance. But I was wondering if a large number of instances "waking up from the dead" and acting maliciously could cause some trouble. Or would such "undead" instances pose no more threat to the fediverse than the same number of newly created malicious instances ? I'm mainly thinking about stuff like being in a privileged position to DoS most instances at once, or impersonation of accounts that used to actually exist on these "undead" instances

79

Cross-posted from : https://lemmy.pierre-couy.fr/post/584644

While monitoring my Pi-Hole logs today, I noticed a bunch of queries for XXXXXX.bodis.com, where XXXXXX are numbers. I saw a few variations for the numbers, each one being queried several times.

Digging further, I found out these queries were caused by CNAME records on domains that look like they used to point to Lemmy/Kbin instances.

From what I understand, domain owners can register a CNAME record to XXXXXX.bodis.com and earn some money from the traffic it receives. I guess that each number variation is a domain owner ID in Bodis' database. I saw between 5 and 10 different number variations, each one being pointed to by a bunch of old Lemmy domains.
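
If you want to check a domain yourself, querying its CNAME record is enough to see whether it now points to Bodis (example.org is just a placeholder here) :

dig +short example.org CNAME
# A parked domain of this kind typically answers with something like XXXXXX.bodis.com.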

This probably means that among actors who snatch expired domains, several of them have taken a specific interest in expired domains of old Lemmy instances. Another hypothesis is that a lot of domains were registered for hosting Lemmy during the Reddit API debacle (about 1 year ago) and started expiring recently.

Are there any other instance admins who noticed the same thing ? Is either of my two hypotheses more plausible than the other ? Should we worry about this trend ?

Anyway, I hope this at least serves as a reminder to not let our domains expire ;)


25

Cross-posted from : https://lemmy.pierre-couy.fr/post/581642

Context : Immich's default map tile provider (which gets sent a bunch of PII every time you use the map feature) is a company that I see no reason to trust. This is a follow-up to this post, with the ~~permanent~~ temporary fix I came up with. I will also summarize the general opinion from the comments, as well as some interesting bits of knowledge that commenters shared.

Hacky fix

This will use Nginx's proxy module to build a caching proxy in front of OpenStreetMap's tile server and to serve a custom style.json for the maps.

This works well for me, since I already proxy all my services behind a single Nginx instance. It is probably possible to achieve similar results with other reverse proxies, but this would obviously need to be adapted.

Caching proxy

Inside Nginx's http config block (usually in /etc/nginx/nginx.conf), create a cache zone (a directory that will hold cached responses from OSM) :

http {
     # You should not need to edit existing lines in the http block, only add the line below
    proxy_cache_path /var/cache/nginx/osm levels=1:2 keys_zone=osm:100m max_size=5g inactive=180d;
}

You may need to manually create the /var/cache/nginx/osm directory and set its owner to Nginx's user (typically www-data on Debian based distros).
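
For example (assuming the www-data user mentioned above ; adjust to whatever user your nginx runs as) :

sudo mkdir -p /var/cache/nginx/osm
sudo chown www-data:www-data /var/cache/nginx/osm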

Customize the max_size parameter to change the maximum amount of cached data you want to store on your server. The inactive parameter will cause Nginx to discard cached data that has not been accessed in this duration (180d ≈ 6 months).

Then, inside the server block that serves your Immich instance, create a new location block :

server {
    listen 443 ssl;
    server_name immich.your-domain.tld;

    # You should not need to change your existing config, only add the location block below

    location /map_proxy/ {
        proxy_pass https://tile.openstreetmap.org/;
        proxy_cache osm;
        proxy_cache_valid 180d;
        proxy_ignore_headers Cache-Control Expires;
        proxy_ssl_server_name on;
        proxy_ssl_name tile.openstreetmap.org;
        proxy_set_header Host tile.openstreetmap.org;
        proxy_set_header User-Agent "Nginx Caching Tile Proxy for self-hosters";
        proxy_set_header Cookie "";
        proxy_set_header Referer "";
    }
}

Reload Nginx (sudo systemctl reload nginx). Confirm this works by visiting https://immich.your-domain.tld/map_proxy/0/0/0.png, which should now return a world map PNG (the one from https://tile.openstreetmap.org/0/0/0.png )

This config ignores cache control headers from OSM and sets its own cache validity duration (the proxy_cache_valid parameter). After the specified duration, the proxy will re-fetch the tiles. Six months seems reasonable to me for this use case, and it could probably be set to a few years without causing issues.

Besides being lighter on OSM's servers, the caching proxy improves privacy by only requesting tiles from upstream the first time they are viewed. This config also strips cookies and the referrer before forwarding queries to OSM, and sets a user agent for the proxy following the OSM Foundation's guidelines (according to these guidelines, you should add contact information to this user agent).
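
For instance, you could replace the User-Agent line from the config above with something like this (the address is obviously a placeholder) :

proxy_set_header User-Agent "Nginx Caching Tile Proxy for self-hosters - contact: admin@your-domain.tld";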

This can probably be made to work on a different domain than the one serving your Immich instance, but that would probably require adding the appropriate CORS headers.
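
I have not tested the cross-domain variant, but it would presumably boil down to adding something like this inside the location block (with immich.your-domain.tld being the domain the Immich web UI is served from) :

# Untested sketch : allow the web UI served from another domain to fetch the tiles
add_header Access-Control-Allow-Origin "https://immich.your-domain.tld" always;
add_header Access-Control-Allow-Methods "GET, OPTIONS" always;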

Custom style.json

I came up with the following mapstyle :

{
  "version": 8,
  "name": "Immich Map",
  "sources": {
    "immich-map": {
      "type": "raster",
      "tileSize": 256,
      "tiles": [
        "https://immich.your-domain.tld/map_proxy/{z}/{x}/{y}.png"
      ]
    }
  },
  "sprite": "https://maputnik.github.io/osm-liberty/sprites/osm-liberty",
  "glyphs": "https://fonts.openmaptiles.org/{fontstack}/{range}.pbf",
  "layers": [
    {
      "id": "raster-tiles",
      "type": "raster",
      "source": "immich-map",
      "minzoom": 0,
      "maxzoom": 22
    }
  ],
  "id": "immich-map-dark"
}

Replace immich.your-domain.tld with your actual Immich domain, and remember the absolute path you save this at.

One last update to nginx's config

Since Immich currently does not provide a way to manually edit style.json, we need to serve it from http(s). Add one more location block below the previous one :

location /map_style.json {
    alias /srv/immich/mapstyle.json;
}

Replace the alias parameter with the path where you saved the JSON mapstyle. After reloading nginx, your JSON style will be available at https://immich.your-domain.tld/map_style.json
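
You can quickly confirm it is served as expected (replace the domain with yours) :

curl -s https://immich.your-domain.tld/map_style.json | head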

Configure Immich to use this

For this last part, follow steps 8, 9 and 10 from this guide (use the link to map_style.json for both light and dark themes). After clearing the browser's or app's cache, the map should now be loaded from your caching proxy. You can confirm this by tailing Nginx's logs while you zoom and move around the map in Immich.
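
For example (assuming the default Debian log location ; your access log path may differ) :

sudo tail -f /var/log/nginx/access.log | grep map_proxy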

Summary of comments from previous post

Self-hosting a tile server is not realistic in most cases

People who have previously worked with maps seem to confirm that there is no tile server solution lightweight enough to be self-hosted by hobbyists. There is maybe some hope in generating tiles on demand, but someone with deep knowledge of the file formats involved in the process should confirm this.

Some interesting links were shared, which seem to confirm this is not realistically self-hostable with the available software :

General sentiment about this issue

Throughout this part, I want to emphasize that while there seems to be a consensus, it is only based on the few comments from the previous post and may be biased by the fact that we're discussing this on a non-mainstream platform. If you disagree with anything below, please comment on this post and explain your point of view.

  • Nobody declared that they had noticed the requests to a third-party server before
  • A non-negligible fraction of Immich users are interested in the privacy benefits over other solutions such as Google photos. These users do not like their self-hosted services to send requests to third-party servers without warning them first
  • The fix should consist of the following :
    • Clearly document the implications of enabling the map, and any feature that sends requests to third parties
    • Disable by default features that send requests to third parties (especially if they involve any form of geolocated data)
    • Provide a way to easily change the tile provider. A select menu with a few pre-configured style.json would be nice, along with a way to manually edit style.json (or at least some of its fields) directly from the Immich config page

[-] pcouy@lemmy.pierre-couy.fr 25 points 1 month ago* (last edited 1 month ago)

At this point, I'll just assume you are trolling and stop replying after this comment.

This post is trying to provide a generic solution to the fact that there is no reasonable way to get map tiles without relying on a third-party provider.

I additionally included instructions on how to set it up with Immich, but I don't see how a caching proxy in front of OSM should be part of Immich, a software focused on managing photo libraries.

[-] pcouy@lemmy.pierre-couy.fr 43 points 1 month ago

Blocking the DNS was the first thing I did. This is intended to restore the map feature without having to trust a random company I've never heard of.

What do you mean by "a diff of a code fix" that would be simpler ?

[-] pcouy@lemmy.pierre-couy.fr 29 points 1 month ago

Quoting one dev from the conversation I had on Discord :

the one run by OSM is not intended for general purpose use because that results in way too much load on their system. We used to use theirs, but as Immich grew we decided that we should relieve them of that

I guess you (and they) are talking about raster tiles, since OSM does not seem to provide vector tiles

[-] pcouy@lemmy.pierre-couy.fr 37 points 1 month ago* (last edited 1 month ago)

When I mentioned that "I can confirm it is not realistic to self-host a tile provider", it's because I tried to run maptiler : it maxed out my CPU for 2 hours before my disk got filled while trying to generate the tiles from OSM data (and that was just for France)

Edit : Anyway, I don't think this should be in Immich's scope. Simply providing an easy option to switch tile providers would allow people motivated enough to host maptiler to use it

Edit bis : More details on how hard it is to host your own tile provider are available on the official OSM wiki

270
submitted 1 month ago* (last edited 1 month ago) by pcouy@lemmy.pierre-couy.fr to c/selfhosted@lemmy.world

Update : I made a follow-up post containing a Nginx-based solution to cache map tiles from OSM and limit the amount of PII you send

While monitoring the logs in Rethink DNS (awesome app BTW) today, I noticed the Immich app making requests to api-l.cofractal.com.

After reaching out on Immich's Discord, the devs explained to me that it is used as a tile provider for the map feature. I can confirm it is not realistic to self-host a tile provider without heavily tuning down the level of detail on the map (which would still require a lot of disk space and CPU time). I understand the need for a third-party service to provide the map tiles, but I'm concerned by this one.

Visiting cofractal.com only tells us that they're selling APIs. I did not find any details about the company, not even the country it's registered in. The website is also missing information about what they are or are not logging. Everything else seems gated behind a login page, but they "are not currently accepting new customers". The whois for the domain says they're in California. Digging a bit more, I found AS26073, which apparently is the same company.

This bothers me, because Cofractal gets sent every location you viewed (and the zoom level) on Immich's map, along with your client's IP address and a "Referrer" header pointing to your Immich instance. This sounds like a lot of PII to me. It's also behind cloudflare which gets to see the same stuff.

When asked about it, one dev (thanks to them for almost instantly replying to every concern/question I threw at them) explained that they personally know the people behind Cofractal. According to this Immich dev, Cofractal provides free access to its paid service to Immich's user base as a way to support the project, with the side benefit of load testing their platform.

This explanation seems plausible and reasonable to me. However, I do not personally know the people behind Cofractal, and by default, I do not trust for-profit companies to act in an altruistic way. Here's a summary of everything that makes me uneasy about this company :

  • it does not say anything about what data they do or do not log
  • it requires digging through whois records to find the most basic info about the company
  • it freely provides access to its normally paid service (for the whole Immich user base), but it does not communicate about it (or it is really hard to find)
  • it does not communicate about anything : searching for its name only returns its home page and websites with information on Autonomous Systems
  • it is "not currently accepting new [paying] customers" while providing the service for free to a quite large user base (Immich v1.109.2 got 170k downloads in 5 days, v1.108.0 got 438k downloads in 13 days )
  • It is not mentioned anywhere in the whole immich.app website (searching for site:immich.app "cofractal" gave me no result). Not even a "Thank You" or "Sponsor" note on the homepage for the free API
  • (it is behind cloudflare)

The dev I talked to encouraged me to create a feature request, and seemed favorable to adding a switch for disabling maps client side. It is already possible to disable it server-wide, and the "URL to a style.json map theme" option seems to provide a way to customize the tile provider. Which leads to this post : I'm trying to collect feedback on this before creating the feature request.

  • It should be made prominently clear to server admins that leaving maps enabled causes clients to send requests to a third-party server, and give details about what is sent (viewed locations, zoom level, IP address, Immich instance URL). The Post Install Steps in the docs and a paragraph above the switch on the config page seem like good places to me. Are there other/more appropriate places for such a warning ?
  • The "URL to a style.json map theme" option should probably be renamed to make it clearer that it allows changing tile providers. Or better yet, it could be reworked to make it easier to choose which third-party you decide to trust
  • What do you think about the idea of providing instance admins with a list of choices for tile providers ? Maybe with a short pros/cons list in the docs for each provider. I'd be fine with using a more reputable provider with the extra step of configuring my own API key (which would probably require proxying requests to the tile provider to not share the API key with all clients)
  • Should the Immich server proxy requests to the tile provider in any case ? Since the tile provider has access to the Referrer and Origin headers (which is probably required for CORS), they are currently able to link user IP addresses with Immich instances. Proxying requests with the Immich server should prevent that.
  • I would go as far as making maps disabled by default for new installs. I understand that "disabling by default would be a significant downgrade for a majority of users", but I feel like there's a strong overlap between the self-hosting and privacy communities. So we should at least have some debate about it

I've also been told that I'm the first one to raise concerns about this, which leads to one more question : Did nobody complain because nobody noticed ? Or are my concerns unjustified ?

22
submitted 2 months ago* (last edited 2 months ago) by pcouy@lemmy.pierre-couy.fr to c/france@jlai.lu

For reference : https://etudiant.lefigaro.fr/article/bac-philo-2023-qui-de-raphael-enthoven-ou-chatgpt-redige-la-meilleure-copie_a694c010-0a09-11ee-bd34-f2c2eadd1748/

(sorry about the video sponsor that appears in the preview generated by Lemmy)

6
submitted 3 months ago by pcouy@lemmy.pierre-couy.fr to c/france@jlai.lu
[-] pcouy@lemmy.pierre-couy.fr 118 points 3 months ago

Downvoted for cropping out the reference to the original...

[-] pcouy@lemmy.pierre-couy.fr 33 points 5 months ago* (last edited 5 months ago)

The worst thing about eclipse I've had to deal with is its git integration. The conflict resolution tool is awful and half the terminology diverges from plain git.

The fact that it has a "Push & Commit" button also drives me mad far more than it should

[-] pcouy@lemmy.pierre-couy.fr 32 points 6 months ago* (last edited 6 months ago)

What's up with all the shilling posts lately?

This has existed since at least 2018 according to their Twitter, and is related to cryptocurrencies through its Radworks DAO

Edit : I'm not saying OP themselves is a shill. Radicle did a pretty good job at hiding its cryptocurrency ties. They even renamed their token from Radicle to Radworks a few years ago. It seems like cryptobros are adapting to the fact that being related to cryptocurrencies hinders adoption among technical people.

[-] pcouy@lemmy.pierre-couy.fr 51 points 6 months ago

For anyone who wonders, this is related to cryptocurrencies

13

I am trying to come up with a reusable template to quickly start new projects using my preferred tools and frameworks, and I'm happy with what I've got. However, using Docker is quite new to me and I've probably done some weird or unconventional stuff in my docker-compose.yml or my Dockerfiles. I'd love to learn from people with more experience with Docker, so feel free to tell me everything that is wrong with my setup.

I'm more confident about the stuff I did with Python/Django and Nuxt, but all criticism is welcome. This also applies to the readme : I'd like to provide detailed instructions about working with this project template, so please report anything that is unclear or missing.

Thank you to anyone who takes the time to check it out and help me make this useful to as many people as possible.

3

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list. Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools.

28

pcouy

joined 1 year ago