Python 3.13.7 (www.python.org)

An exercise to help build the right mental model for Python data. The “Solution” link uses memory_graph to visualize execution and reveals what’s actually happening:


I'm working on a Python script to find a snappy Lemmy instance, as the one I was using (not dbzer0) has recently slowed down to about 5 seconds per page load. I have a rough version of the script, but I need to fetch a list of available Lemmy instances through their API.

Could anyone guide me on how to access the Lemmy API to retrieve a list of instances?
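For reference, a minimal sketch of one way to do this, assuming the instance exposes Lemmy's v3 HTTP API: any instance's /api/v3/federated_instances endpoint lists the instances it federates with. The response shape differs between Lemmy versions (plain domain strings vs. objects with a "domain" field), so the normalization below is a guess, and the seed instance is just an example:

import requests

# A seed instance to query; any reachable Lemmy instance should work.
SEED_INSTANCE = "https://programming.dev"

def fetch_federated_instances(seed: str) -> list[str]:
    # Ask the seed instance which instances it federates with (no auth needed).
    resp = requests.get(f"{seed}/api/v3/federated_instances", timeout=10)
    resp.raise_for_status()
    linked = resp.json()["federated_instances"]["linked"]
    # Entries may be plain domain strings or dicts with a "domain" key,
    # depending on the Lemmy version running on the seed instance.
    return [item["domain"] if isinstance(item, dict) else item for item in linked]

if __name__ == "__main__":
    for domain in fetch_federated_instances(SEED_INSTANCE)[:20]:
        print(domain)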


Some people struggle with recursion, but because the invocation_tree package visualizes the Python call tree in real time, it becomes easy to understand what is going on and to debug any remaining issues.

See this one-click Quick Sort demo in the Invocation Tree Web Debugger.
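For reference, here is a plain recursive quicksort of the kind such a call-tree visualizer steps through; a minimal sketch, not the demo's exact code:

def quick_sort(values: list[int]) -> list[int]:
    # Base case: lists of length 0 or 1 are already sorted.
    if len(values) <= 1:
        return values
    pivot, *rest = values
    # Each call partitions the list and recurses twice, which is exactly
    # the branching that an invocation-tree visualization makes visible.
    smaller = [v for v in rest if v < pivot]
    larger = [v for v in rest if v >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]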


Better understand the Python Data Model or Data Structures through memory_graph visualization, with just one click:


cross post from: https://programming.dev/post/37369902

Hi everyone,

We, the iceoryx community, just released iceoryx2 v0.7, an ultra-low-latency inter-process communication framework for Rust, C, and C++ and now, with this release, Python.

If you are into robotics, embedded real-time systems (especially safety-critical), autonomous vehicles or just want to hack around, iceoryx2 is built with you in mind.

Check out our release announcement for more details: https://ekxide.io/blog/iceoryx2-0-7-release

And the link to the project: https://github.com/eclipse-iceoryx/iceoryx2

Python Mutability (programming.dev)

See the Solution and Explanation.
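The exercise itself is behind the link, but as a minimal illustration of the kind of behavior it is about, here is aliasing of a mutable list next to rebinding of an immutable int:

# Two names bound to the same mutable list: mutation is visible through both.
a = [1, 2, 3]
b = a
b.append(4)
print(a)       # [1, 2, 3, 4], because a and b are the same object
print(a is b)  # True

# Rebinding a name to a new int leaves the other name untouched.
x = 10
y = x
y = y + 1
print(x, y)    # 10 11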

Python Packaging Ecosystem Survey 2025 (anaconda.surveymonkey.com)
Introducing NeoSQLite (www.youtube.com)
submitted 1 month ago by cwt@lemmy.ml to c/python@programming.dev

Introducing NeoSQLite — a Python library with an API highly compatible with PyMongo, allowing you to use SQLite almost like MongoDB: https://github.com/cwt/neosqlite

This project integrates two of my other open-source projects:

  • fts5-icu-tokenizer (https://github.com/cwt/fts5-icu-tokenizer): An ICU-powered tokenizer for SQLite's FTS5, enabling full-text search support for languages worldwide. In NeoSQLite, it powers the $text operator for advanced multilingual search capabilities.

  • quez (https://github.com/cwt/quez): Pluggable, compressed in-memory queues and deques for both sync and asyncio applications. In NeoSQLite, it compresses SQLite query results, reducing memory usage by 50% to 80%, which is especially beneficial when working with large datasets.

NeoSQLite is ideal for lightweight, embeddable applications that need MongoDB-like query flexibility with the simplicity and portability of SQLite.

Feel free to check it out and share your feedback.
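Not taken from the project's docs, just a hedged sketch of what PyMongo-compatible usage over an embedded SQLite file might look like; the neosqlite.Connection entry point and collection access used here are hypothetical names, so check the repository for the real API:

import neosqlite  # hypothetical import; see the repository for the actual package layout

# Open an embedded SQLite database and use a collection PyMongo-style.
conn = neosqlite.Connection("app.db")  # hypothetical constructor name
users = conn.users                     # hypothetical collection access

users.insert_one({"name": "Ada", "lang": "Python"})
for doc in users.find({"lang": "Python"}):
    print(doc)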

Python: The Documentary (www.youtube.com)
Memory Graph Web Debugger (programming.dev)
submitted 1 month ago* (last edited 1 month ago) by bterwijn@programming.dev to c/python@programming.dev

Hi, I'm new here. I'd like to share my new Memory Graph Web Debugger, which you can use to visualize and debug your Python data structures with just one click. This is an example of a binary tree implementation. I feel this tool could level up Python education. I'm interested in your thoughts about it; feedback welcome.
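For context, here is a minimal binary search tree of the kind such a debugger renders as linked node objects; this is a sketch, not the linked example's code:

class Node:
    # Each node holds a value plus references to its children; a memory-graph
    # view draws these references as edges between node objects.
    def __init__(self, value: int):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value: int) -> Node:
    # Standard binary-search-tree insertion: smaller values go left, others right.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

tree = None
for v in [5, 3, 8, 1, 4]:
    tree = insert(tree, v)
print(tree.value, tree.left.value, tree.right.value)  # 5 3 8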


I have been working for a while on an easy way to run some JavaScript tools from Python, primarily to be able to use transpilers implemented in JS itself in my Python application's pipeline.

That led to the creation of DukPy (github.com/amol-/dukpy), which met my needs because it is very straightforward to install without any external dependencies and provides good support for modern JavaScript.

The project is only a few GitHub stars away from the 512 badge, and as useless as that target is, I thought it was a good excuse to share the project here in case anyone is looking for something similar.
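For anyone curious, here is a minimal sketch of the kind of call DukPy enables, assuming the dukpy.evaljs entry point; the keyword-argument passing via the dukpy object is how I recall the README describing it, so double-check against the docs:

import dukpy

# Evaluate a JavaScript snippet in the embedded engine and get the result
# back as a Python object (lists and objects round-trip like JSON).
doubled = dukpy.evaljs(
    "var nums = [1, 2, 3]; nums.map(function (n) { return n * 2; })"
)
print(doubled)  # [2, 4, 6]

# Keyword arguments should be reachable from the script via the 'dukpy' object.
print(dukpy.evaljs("dukpy['name'].toUpperCase()", name="dukpy"))  # DUKPY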

submitted 1 month ago* (last edited 1 month ago) by promitheas@programming.dev to c/python@programming.dev

Hello guys, I've built a super simple web app to convert Google Maps URLs to the OSM format, mainly for my personal use but also for anyone else who has this use case.

Most link formats work, but there are a couple that have been giving me particular trouble. Here are the formats that work as expected:

https://www.google.com/maps/place/Eiffel+Tower/@48.8584,2.2945,17z
https://google.com/maps/place/Statue+of+Liberty/@40.6892,-74.0445,17z
https://maps.app.goo.gl/SoDFgcPJKVPeZXqd8
https://www.google.co.uk/maps/place/Buckingham+Palace/@51.5014,-0.1419,17z
https://www.google.de/maps/place/Brandenburger+Tor/@52.5163,13.3777,17z

These two formats don't work on the server, but if I run the app locally they work and give a correct OSM-formatted link:

https://maps.app.goo.gl/jXqDkM2NWN55kZcd9?g_st=com.google.maps.preview.copy
https://maps.google.com/maps?q=Big+Ben%2C+London

You can test it on the actual website (gmaps2osm.de), but if you try to convert links of this format there, you get an Internal Server Error after around 10 seconds of it thinking.

Right now I'm focusing on the .preview.copy formatted links. As you will see from the code below, I am using Playwright to headlessly handle Google's annoying redirects and consent forms:

from typing import final
from flask import Flask, request, render_template, jsonify
from datetime import datetime
import re
import urllib.parse
import requests
from playwright.sync_api import sync_playwright

# Test urls for regex:
# https://www.google.com/maps/place/Eiffel+Tower/@48.8584,2.2945,17z
# https://google.com/maps/place/Statue+of+Liberty/@40.6892,-74.0445,17z
# https://maps.google.com/maps?q=Big+Ben%2C+London # has an issue, doesnt work
# https://maps.app.goo.gl/4jnZLELvmpvBmFvx8
# https://maps.app.goo.gl/jXqDkM2NWN55kZcd9?g_st=com.google.maps.preview.copy
# https://www.google.co.uk/maps/place/Buckingham+Palace/@51.5014,-0.1419,17z
# https://www.google.de/maps/place/Brandenburger+Tor/@52.5163,13.3777,17z

debug_enabled = False
is_headless = not debug_enabled
print(f"is_headless: {is_headless}")
log_file_path = ""
app = Flask(__name__)

GMAPS_URL_RE = re.compile(
	r"""(?x)  # Verbose mode
	^https?://
	(
		(www\.)?
		(google\.[a-z.]+/maps)			  |		# www.google.com/maps, google.co.uk/maps, etc.
		(maps\.google\.[a-z.]+)			  |		# maps.google.com
		(goo\.gl/maps)					  |		# goo.gl/maps
		(maps\.app\.goo\.gl)					# maps.app.goo.gl
		# (maps\.app\.goo\.gl.*\.preview\.copy)	# maps.app.goo.gl(...).preview.copy
	)
	""",
	re.IGNORECASE
)

# ---- Dispatcher
def extract_coordinates(url: str):
	if ".preview.copy" in url:
		return handle_preview_copy_url(url)
	if "maps?q=" in url:
		return handle_browser_resolved_url(url)
	else:
		return handle_standard_url(url)

# ---- Handler: preview.copy links
def handle_preview_copy_url(url: str):
	url = url.split('?')[0]
	input_url = resolve_initial_redirect(url)
	if "consent.google.com" in input_url:
		parsed = urllib.parse.urlparse(input_url)
		query = urllib.parse.parse_qs(parsed.query)
		continue_url = query.get("continue", [""])[0]
		if continue_url:
			input_url = urllib.parse.unquote(continue_url)
		else:
			log_msg("ERROR", "No 'continue' parameter found.")
			return None, input_url
	final_url = extract_with_playwright(input_url)
	coord = extract_coords_from_url(final_url)
	return coord, final_url

# ---- Handler: links with only a query containing place names (https://maps.google.com/maps?q=Big+Ben%2C+London)
def handle_browser_resolved_url(url: str):
	input_url = resolve_initial_redirect(url)
	if "consent.google.com" in input_url:
		parsed = urllib.parse.urlparse(input_url)
		query = urllib.parse.parse_qs(parsed.query)
		continue_url = query.get("continue", [""])[0]
		if continue_url:
			input_url = urllib.parse.unquote(continue_url)
		else:
			log_msg("ERROR", "No 'continue' parameter found.")
			return None, input_url
	final_url = extract_with_playwright(input_url)
	coord = extract_coords_from_url(final_url)
	return coord, final_url

# ---- Handler: standard links
def handle_standard_url(url: str):
	try:
		# Follow redirect to get final destination
		response = requests.head(url, allow_redirects=True, timeout=10)
		final_url = response.url
		log_msg("DEBUG", "Final URL:", final_url)
	except requests.RequestException as e:
		raise RuntimeError(f"Failed to resolve standard URL: {e}")
	coords = extract_coords_from_url(final_url)
	return coords, final_url

# ---- Utility: redirect resolver
def resolve_initial_redirect(url: str):
	try:
		response = requests.get(url, allow_redirects=True, timeout=10)
		return response.url
	except Exception as e:
		log_msg("ERROR", "Redirect failed: ", e)
		return url

# ---- Utility: use playwright to get the final rendered URL
def extract_with_playwright(url:str):
	with sync_playwright() as p:
		browser = p.chromium.launch(headless=is_headless)
		page = browser.new_page()
		page.goto(url)
		# Click reject button if necessary
		try:
			page.locator('button:has-text("Reject all")').first.click(timeout=5000)
		except Exception:
			pass # No reject button
		page.wait_for_function(
				"""() => window.location.href.includes('/@')""",
				timeout=15000
		)
		final_url = page.url
		log_msg("DEBUG", "Final URL: ", final_url)
		browser.close()
		return final_url

# ---- Utility: extract coordinates with regex patterns
def extract_coords_from_url(url: str):
	patterns = [
		r'/@([-.\d]+),([-.\d]+)',				 # Matches /@lat,lon
		r'/place/([-.\d]+),([-.\d]+)',			 # Matches /place/lat,lon
		r'/search/([-.\d]+),\+?([-.\d]+)',
		r'[?&]q=([-.\d]+),([-.\d]+)',			 # Matches ?q=lat,lon
		r'[?&]ll=([-.\d]+),([-.\d]+)',			 # Matches ?ll=lat,lon
		r'[?&]center=([-.\d]+),([-.\d]+)',		 # Matches ?center=lat,lon
		r'!3d([-.\d]+)!4d([-.\d]+)'				 # Matches !3dlat!4dlon
	]
	for pattern in patterns:
		match = re.search(pattern, url)
		if match:
			return match.groups()
	return None

# ---- Utility: logging function to output and write to file
def log_msg(level: str, msg: str, optional_arg = None):
	ts = datetime.now()
	iso_ts = ts.isoformat()
	if level == "DEBUG" and debug_enabled == False:
		return
	if optional_arg:
		log_line = f"[{iso_ts}]:[{level}]: {msg} {optional_arg}"
	else:
		log_line = f"[{iso_ts}]:[{level}]: {msg}"
	print(log_line)
	with open(log_file_path+"gmaps2osm_logs.txt", "a") as log_file:
		log_file.write(log_line+"\n")

@app.route("/", methods=["GET", "POST"])
def index():
	result = {}
	if request.method == "POST":
		url = request.form.get("gmaps_url", "").strip()
		log_msg("DEBUG", "URL:", url)
		if not url:
			result["error"] = "Please enter a Google Maps URL."
			log_msg("DEBUG", "Not a URL")
		else:
			try:
				if not GMAPS_URL_RE.search(url):
					result["error"] = "Please enter a valid Google Maps URL."
				else:
					coords, final_url = extract_coordinates(url)
					log_msg("DEBUG", "coords:", coords)
					if coords:
						lat, lon = coords
						result["latitude"] = lat
						result["longitude"] = lon
						result["osm_link"] = f"https://osmand.net/map?pin=%7Blat%7D%2C%7Blon%7D#16/{lat}/{lon}"
					else:
						raise ValueError("No coordinates found in the URL.")
			except ValueError as e:
				result["error"] = f"Error resolving or parsing URL: {e}"
	return render_template("index.html", **result)

if __name__ == "__main__":
	app.run(debug=True)

You can view all the code at the github: https://github.com/promitheas17j/gmaps2osm

But essentially it's running as a Docker container (my first project using Docker, woohoo) and this is the output of the logs on the server when I enter such a link:

$ docker logs -f gmaps2osm
[2025-08-15 17:42:00,132] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/app.py", line 164, in index
    coords, final_url = extract_coordinates(url)
  File "/app/app.py", line 42, in extract_coordinates
    return handle_preview_copy_url(url)
  File "/app/app.py", line 61, in handle_preview_copy_url
    final_url = extract_with_playwright(input_url)
  File "/app/app.py", line 113, in extract_with_playwright
    page.wait_for_url("**/@**", timeout=15000)
  File "/usr/local/lib/python3.10/dist-packages/playwright/sync_api/_generated.py", line 9162, in wait_for_url
    self._sync(
  File "/usr/local/lib/python3.10/dist-packages/playwright/_impl/_sync_base.py", line 115, in _sync
    return task.result()
  File "/usr/local/lib/python3.10/dist-packages/playwright/_impl/_page.py", line 584, in wait_for_url
    return await self._main_frame.wait_for_url(**locals_to_params(locals()))
  File "/usr/local/lib/python3.10/dist-packages/playwright/_impl/_frame.py", line 263, in wait_for_url
    async with self.expect_navigation(
  File "/usr/local/lib/python3.10/dist-packages/playwright/_impl/_event_context_manager.py", line 33, in __aexit__
    await self._future
  File "/usr/local/lib/python3.10/dist-packages/playwright/_impl/_frame.py", line 239, in continuation
    event = await waiter.result()
playwright._impl._errors.TimeoutError: Timeout 15000ms exceeded.
=========================== logs ===========================
waiting for navigation to "**/@**" until 'load'
============================================================

This was when I gave this URL as input: https://maps.app.goo.gl/jXqDkM2NWN55kZcd9?g_st=com.google.maps.preview.copy

Does anyone have any idea why Playwright is throwing errors at me?

Thanks in advance, I really appreciate any help you can give me. I've been banging my head against this issue on and off for the past couple of months.
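Not a fix, but a hedged debugging sketch: wrapping the wait in a try/except and dumping the current URL, a screenshot, and the HTML on timeout would at least show whether the headless server session is stuck on a consent or CAPTCHA page instead of the maps view. The file paths below are arbitrary placeholders.

from playwright.sync_api import sync_playwright, TimeoutError as PlaywrightTimeout

def debug_final_url(url: str) -> str:
    # Same flow as extract_with_playwright, but captures evidence on timeout
    # so the server-side failure can be inspected after the fact.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        try:
            page.wait_for_function(
                "() => window.location.href.includes('/@')", timeout=15000
            )
        except PlaywrightTimeout:
            print("Timed out at:", page.url)
            page.screenshot(path="timeout.png", full_page=True)  # arbitrary path
            with open("timeout.html", "w") as f:                 # arbitrary path
                f.write(page.content())
            raise
        final_url = page.url
        browser.close()
        return final_url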

how to use async for? (discuss.tchncs.de)

What does async for do? Or rather, how do async iterators in general work? I'm pretty sure async for just helps you iterate over an async iterator.

I think async iterators have a method __anext__() which “steps through the iterable”? I don't understand what stepping through means.

Also, __anext__() returns an awaitable? What does that mean? Especially if I have something like an AsyncIterator[Object], what does the awaitable have to do with the object?
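Roughly: an async iterable's __aiter__() hands back an iterator whose __anext__() is called once per loop step; each call returns an awaitable, and awaiting it either yields the next Object or raises StopAsyncIteration to end the loop. async for does that awaiting for you. A minimal sketch:

import asyncio

class Countdown:
    # A minimal async iterator: __anext__ is a coroutine, so each step of the
    # loop can await something (here a sleep standing in for real I/O).
    def __init__(self, start: int):
        self.current = start

    def __aiter__(self):
        return self

    async def __anext__(self) -> int:
        if self.current <= 0:
            raise StopAsyncIteration  # ends the async for loop
        await asyncio.sleep(0.1)      # pretend we waited on the network
        self.current -= 1
        return self.current + 1

async def main():
    # async for awaits __anext__() for us on every iteration.
    async for value in Countdown(3):
        print(value)  # 3, 2, 1

asyncio.run(main())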
