PyCon Israel 2025 - Conference
Schedule
AI agents can plan, act, and adapt, or at least, they appear to. Under the hood, they remain fragile systems built on prediction rather than true understanding. This talk explores what LLM agents are really capable of today, where they break, and how to design around their limits. Through real-world failure patterns and success stories, we’ll unpack the myths, highlight common traps, and share practical tools and evaluation techniques for building agents that are genuinely useful. Finally, we’ll look ahead to how these systems may evolve, and what you can do right now to stay ahead as agent-powered development becomes the norm.
Speed up your pipelines by doing less! We’ll explore memory-efficient caching and filtering, and take a deep dive into the often-overlooked Bloom filter, with practical examples of avoiding unnecessary IO and computation.
Python is powerful, but data pipelines, batch workloads, and API servers often suffer from unnecessary IO and redundant computation that slow things down.
In this talk, we’ll explore two essential techniques for speeding up workloads by doing less, caching and filtering, and how to implement them efficiently with relatively low memory overhead, along with real-world use cases where these techniques made an impact.
We’ll also take a closer look at an often overlooked and misunderstood tool: the Bloom filter. You’ll learn how it works, when it’s useful (and when it’s not), and how it helps you maintain a low memory footprint while effectively avoiding unnecessary database queries, API calls, or heavy computation—before they even happen.
Whether you're building data pipelines, APIs, or wrangling large datasets, this talk will give you practical insights and Pythonic tools to write smarter, faster, and more memory-conscious code.
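The core trick is easy to sketch. Below is a toy Bloom filter (an illustration, not the talk's implementation) that derives its k bit positions per item from a single SHA-256 digest:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k bit positions over a fixed bit array.
    May return false positives, never false negatives."""

    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k positions from one digest (double-hashing style).
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print("user:42" in bf)   # True: added items are always found
print("user:99" in bf)   # almost certainly False (tiny false-positive chance)
```

Before querying a database for "user:99", a membership check like this can skip the round trip entirely when the filter says the key was never added.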
Explore advanced enum usage in Python: define methods, attach metadata, and create enums dynamically. Learn the benefits and pitfalls of the enum singleton to write more expressive and maintainable code.
While working on the open-source hdate library (which I maintain) and on internal tools at Intel, I came across some lesser-known but powerful uses of Python enums. From attaching extra data and methods to make code more expressive, to dynamically generating enums from configuration files, these patterns can add clarity to your code if used with care. In this talk, I’ll share these techniques, their advantages, and the subtle pitfalls to watch out for when using a singleton data structure.
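As a taste of the "extra data and methods" pattern, here is the classic Planet example from the stdlib enum documentation, plus a dynamically created enum via the functional API (the Severity mapping is a hypothetical stand-in for values read from a config file):

```python
from enum import Enum

class Planet(Enum):
    # Each member carries extra data: (mass in kg, radius in m).
    EARTH = (5.97e24, 6.37e6)
    MARS = (6.42e23, 3.39e6)

    def __init__(self, mass, radius):
        self.mass = mass
        self.radius = radius

    @property
    def surface_gravity(self):
        # A method on the enum keeps related logic next to the data.
        G = 6.674e-11
        return G * self.mass / self.radius ** 2

print(round(Planet.EARTH.surface_gravity, 2))  # 9.82

# Enums can also be built at runtime, e.g. from a parsed config mapping:
Severity = Enum("Severity", {"LOW": 1, "HIGH": 2})
print(Severity.HIGH.value)  # 2
```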
Code reloading is fun! Python's semantics for def and class complicate it. There are multiple approaches and multiple libraries; I don't think any is one-size-fits-all. This talk will teach you how they work so you can make an informed choice for your code.
An edit->reload cycle on running software can be more fun and productive than exiting and losing state. Python's semantics, where class and def create a new object, are not as friendly to reloading as Smalltalk/Lisp/Ruby, which patch in-place. Reloading is still very possible, with multiple approaches and libraries, but it pays to understand the issues and implementation tradeoffs:
- What importlib.reload() does and does not do.
- Copied references: from ... import ..., instances, callbacks & closures, etc.
- Recording a what-imported-what dependency graph.
- Patching classes/functions in-place vs. updating references, and the limitations of each.
- A secret weapon: gc.get_referrers().
- What IPython's %autoreload, jurigged, and limeade do.
- Renames/deletions and the problem of intent; jurigged's AST diffing.
- Top-level code, singletons, derived values: hard. The idempotent try: ... except NameError: style.
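The copied-references issue mentioned above can be reproduced in a few lines; this sketch writes a throwaway module to disk just to have something to reload:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # always recompile from source on reload

# Create a throwaway module on disk so there is something to reload.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "hotreload_demo.py").write_text("def greet():\n    return 'v1'\n")
sys.path.insert(0, tmp)
importlib.invalidate_caches()

import hotreload_demo
from hotreload_demo import greet        # a copied reference!

# "Edit" the module while the program is running, then reload it.
pathlib.Path(tmp, "hotreload_demo.py").write_text("def greet():\n    return 'v2'\n")
importlib.reload(hotreload_demo)

print(hotreload_demo.greet())  # 'v2': attribute lookup sees the new function
print(greet())                 # 'v1': the from-import still holds the old object
```

The stale `greet` binding is exactly what dependency graphs, in-place patching, and gc.get_referrers() tricks try to fix.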
Decorators are one of the most powerful and expressive features in Python, yet they can be confusing for many developers. In this talk, we’ll demystify decorators by exploring how they work, how to write them, and how to use them effectively.
This talk is designed for intermediate Python developers who want to take their skills to the next level by mastering one of Python’s most versatile features: decorators.
We’ll begin by revisiting the concept of functions as first-class objects, which sets the foundation for understanding how decorators operate. From there, we’ll walk through writing simple decorators, then progress to more advanced topics such as:
+ Decorators with arguments
+ Stacking multiple decorators
+ Class-based decorators
+ Decorators applied to methods and classes
Real-world examples will highlight how decorators are used in practice, in areas such as web development, logging, access control, and benchmarking.
Attendees will leave this talk with a clear mental model of decorators, the ability to confidently implement them, and inspiration to apply them in their own projects. This talk will also provide best practices and tips for keeping decorator logic clean, testable, and maintainable.
This talk can be delivered in Hebrew (my mother tongue) and English.
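As a preview of the "decorators with arguments" topic, here is a small, hypothetical retry decorator factory (a function that takes arguments and returns the actual decorator):

```python
import functools

def retry(times):
    """Decorator factory: retry a failing call up to `times` attempts."""
    def decorator(func):
        @functools.wraps(func)  # preserve the wrapped function's name/docstring
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except ValueError:
                    if attempt == times - 1:
                        raise  # out of attempts: re-raise the last error
        return wrapper
    return decorator

calls = []

@retry(times=3)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("boom")
    return "ok"

print(flaky())      # "ok" on the third attempt
print(len(calls))   # 3
```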
Servers crash, containers restart, and services fail. This talk introduces Durable Python: a way to make workflows survive infrastructure failures, crucial for distributed systems, using AST tricks and durable execution platforms.
Modern Python workflows are more distributed than ever: orchestrating APIs, cloud services, microservices, and databases across environments where infrastructure is not always reliable. When a server crashes, a container restarts, or a service call fails mid-process, traditional Python scripts often must restart from scratch, risking duplicated work, data inconsistencies, or lost progress.
Durable Python changes this. It introduces a model where workflow state is preserved, and execution can automatically resume from the point of failure, without manual recovery, complex retry logic, or redundant operations. This talk will cover: Why infrastructure failures are inevitable — and why Python needs built-in durability to handle them.
The core principles of durable execution: state persistence, fault recovery, and reliable orchestration.
Practical examples and patterns for introducing durability into real-world Python automations, including CI pipelines, DevOps processes, microservice orchestration, and long-running AI agents.
How Durable Python works under the hood — leveraging durable execution platforms, transforming Python's AST, converting non-deterministic calls into trackable activities (like automatic checkpoints), and orchestrating everything seamlessly.
We’ll also live-demo a long-running Python workflow, simulate real infrastructure failures, and show the process resuming exactly where it left off — without re-executing completed steps or losing state.
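The checkpointing idea at the heart of durable execution can be conveyed with a deliberately naive sketch; a real platform does far more (AST transformation, deterministic replay, distributed state), but the "skip work already done" core looks like this:

```python
import json
import os
import pathlib
import tempfile

class Checkpoints:
    """Toy durability: persist each completed step's result to disk,
    so a re-run after a crash or restart skips work already done."""

    def __init__(self, path):
        self.path = pathlib.Path(path)
        self.state = json.loads(self.path.read_text()) if self.path.exists() else {}

    def step(self, name, func):
        if name in self.state:                        # completed on a previous run
            return self.state[name]
        result = func()
        self.state[name] = result
        self.path.write_text(json.dumps(self.state))  # checkpoint before moving on
        return result

ckpt_file = os.path.join(tempfile.mkdtemp(), "workflow.json")

runs = []
def expensive_fetch():
    runs.append(1)
    return "data"

wf = Checkpoints(ckpt_file)
wf.step("fetch", expensive_fetch)

wf2 = Checkpoints(ckpt_file)           # simulate a process restart
wf2.step("fetch", expensive_fetch)     # skipped: restored from the checkpoint

print(len(runs))  # 1: the step ran only once across both "runs"
```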
This talk presents a new technique where objects turn immutable upon first hash calculation. The approach enables convenient mutability up until the object is used as a dict key or set item, without compromising safety beyond that point.
Python's hash-based collections rely on an implicit contract: once an object is used as a dictionary key or added to a set, its hash value (and hence the state it is computed from) should never change. Yet Python's design encourages mutable user-defined classes without enforcing immutability when hashing occurs, leading to subtle bugs when objects modify their state after being hashed.
This talk introduces the "Lazy-Freeze" pattern: a technique where objects automatically transition from mutable to immutable upon their first hash calculation. Unlike the traditional approach which requires immutable construction, Lazy-Freeze allows objects to begin their lifecycle with convenient mutability, then seamlessly lock their state when stability becomes critical.
We'll introduce an implementation built mainly on Python's __hash__ and __setattr__ magic methods, discuss examples and performance implications, and explore some sharp corners.
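A minimal sketch of the pattern (names are illustrative, not the talk's actual implementation): the object freezes itself the first time __hash__ is called, and __setattr__ refuses writes from then on.

```python
class LazyFreeze:
    """Mutable until first hash; frozen afterwards."""

    def __init__(self, **fields):
        object.__setattr__(self, "_frozen", False)
        for key, value in fields.items():
            object.__setattr__(self, key, value)

    def __setattr__(self, name, value):
        if self._frozen:
            raise AttributeError(f"frozen after first hash; cannot set {name!r}")
        object.__setattr__(self, name, value)

    def __hash__(self):
        object.__setattr__(self, "_frozen", True)  # freeze on first hash
        # State no longer changes, so this hash stays stable.
        return hash(tuple(sorted(self.__dict__.items())))

p = LazyFreeze(x=1)
p.x = 2                  # fine: not hashed yet
d = {p: "value"}         # using p as a dict key freezes it
try:
    p.x = 3
except AttributeError as e:
    print("blocked:", e)
```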
Explore Python’s groundbreaking shift beyond the Global Interpreter Lock (GIL). Understand the design tradeoffs, challenges, and performance impacts of running Python without the GIL.
The Global Interpreter Lock (GIL) has been a key part of Python's design, simplifying memory management but limiting parallelism in multi-threaded programs. Recent changes now allow Python to run without the GIL, unlocking true parallelism.
In this talk, we'll explore the implications of this shift: the internal changes to Python, the tradeoffs made for thread safety and performance, and the challenges overcome in the process. We’ll compare execution with and without the GIL through examples and benchmarks, and discuss the potential impact on Python developers and the ecosystem.
Whether you’re interested in Python’s internals, concurrency, or its evolving design, this session will provide a concise and practical overview of one of Python’s most significant updates.
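A quick way to see what is at stake: CPU-bound threads take turns on a standard build but can truly run in parallel on a free-threaded build (PEP 703). This snippet runs on both; only the timing differs:

```python
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def burn(n):
    # Pure-Python CPU-bound work: with the GIL, threads serialize here.
    total = 0
    for i in range(n):
        total += i * i
    return total

N, WORKERS = 2_000_000, 4

start = time.perf_counter()
with ThreadPoolExecutor(WORKERS) as pool:
    results = list(pool.map(burn, [N] * WORKERS))
elapsed = time.perf_counter() - start

# On a free-threaded build this introspection function exists (3.13+);
# on older/standard builds we conservatively report the GIL as enabled.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"{WORKERS} threads took {elapsed:.2f}s (GIL enabled: {gil_enabled})")
```

On a standard build the four threads take roughly 4x the single-thread time; on a free-threaded build with four cores, they can finish in roughly the single-thread time.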
Securing applications is often done using cryptography, but if you don't do it right, it may be broken and you wouldn't even know it. Learn the common cryptographic mistakes in Python and how to fix them using safe practices.
Implementing cryptography is like handling a loaded weapon: powerful, but dangerous in the wrong hands. In this talk, we’ll explore how to properly implement cryptography in Python, using real-world examples of code that led to serious security vulnerabilities. From insecure random number generation and broken key management to misusing cryptographic primitives and rolling your own protocols, we’ll walk through the most common (and often subtle) mistakes developers make. We’ll also cover the correct approaches using modern Python libraries. Whether you need cryptography for secure communication or encrypting data at rest, or you're just curious what the inputs to a cryptographic function actually mean, this session will equip you with the knowledge to do cryptography right, or at least to know when to call in an expert.
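Two of the mistake/fix pairs mentioned above can be shown with nothing but the standard library (the password "hunter2" and the token alphabet are illustrative):

```python
import hashlib
import os
import random
import secrets

# Mistake: `random` is a predictable PRNG -- never use it for secrets.
weak_token = "".join(random.choices("0123456789abcdef", k=32))

# Fix: the `secrets` module draws from the OS CSPRNG.
strong_token = secrets.token_hex(16)   # 32 hex characters

# Mistake: hashing passwords with a single fast hash (cheap to brute-force).
weak_digest = hashlib.sha256(b"hunter2").hexdigest()

# Fix: a slow, salted key-derivation function such as PBKDF2.
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

print(len(strong_token), len(strong_digest))  # 32 hex chars, 32 bytes
```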
What if sound could drive movement? This talk introduces a Python-based program I developed that transforms audio into motion patterns - built for animatronics, optimized for embedded devices, and demonstrated through a raven named Samuel.
How do you get an animatronic raven to speak like a real one - complete with emotion, variation, and natural presence? In this talk, we'll explore how I built a Python-powered audio analysis system that brings Samuel, my interactive animatronic raven, to life. The goal wasn’t just to react to sound, but to create believable, nuanced motion driven by real audio data.
The challenge: to translate raven vocalizations into natural beak movements, without manually scripting the motion for each sound file. I needed a system that could analyze audio and generate movement instructions automatically - flexible, scalable, and Python-driven.
Using libraries like librosa, numpy, and matplotlib, I built a system that analyzes raven sounds, detects vocal energy peaks, and generates servo-ready movement instructions. To guide development, I created visual graphs of the audio - including RMS energy, volume, and frequency - to better understand how sound translated into motion. The program also generates multiple movement maps per sound clip, selecting one at random at runtime to avoid repetitive, robotic behavior.
We'll dive into:
- Audio signal analysis with librosa (RMS, STFT, clustering)
- Generating servo-ready binary movement patterns from raw audio
- Designing variability through randomized motion mapping
- Visualizing sound energy
- Controlling servos in real time using Python and Raspberry Pi
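The RMS-to-movement idea can be illustrated without librosa (librosa.feature.rms performs the per-frame computation below on real audio); here it is vastly simplified, with a made-up signal:

```python
import math

def rms_frames(samples, frame_size=4):
    """RMS energy per frame: the core measure used to decide
    when the beak should open (loud) or stay closed (quiet)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def to_servo_pattern(energies, threshold):
    # Binary movement map: 1 = open beak on this frame, 0 = closed.
    return [1 if e >= threshold else 0 for e in energies]

# A toy "caw": quiet, loud burst, quiet.
signal = [0.05, -0.04, 0.03, -0.05,
          0.90, -0.80, 0.85, -0.95,
          0.02, -0.01, 0.03, -0.02]

energies = rms_frames(signal)
print(to_servo_pattern(energies, threshold=0.5))  # [0, 1, 0]
```

Generating several thresholded patterns with jittered parameters, then picking one at random at runtime, is what keeps the motion from looking robotic.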
We all use asyncio, but not everyone truly understands what happens behind the await.
In this talk, we’ll dive into advanced patterns, real-world pitfalls, and debugging strategies for developers who want to move beyond the basics.
We all use asyncio - but let’s be real: how many of us actually understand what’s happening behind the scenes when we await something?
The moment you step off the happy path - into timeouts, cancellations, and juggling dozens of async tasks - things get a little chaotic, fast.
This talk is a deep dive into the corners of asyncio most of us don’t look at until something breaks. We’ll unpack what really happens behind await, how to manage task lifecycles and what to do when your app just…stops. We’ll explore advanced patterns, sneaky bugs, and practical tools to help you trace and debug what your async code is actually doing.
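As a taste of the timeout-and-cancellation territory: asyncio.wait_for cancels the awaited task when the deadline passes, and the task itself can observe that cancellation and clean up before re-raising.

```python
import asyncio

async def slow_fetch():
    try:
        await asyncio.sleep(10)  # stand-in for a slow network call
        return "data"
    except asyncio.CancelledError:
        # wait_for cancels the task on timeout; clean up, then re-raise.
        print("cancelled: cleaning up")
        raise

async def main():
    try:
        return await asyncio.wait_for(slow_fetch(), timeout=0.1)
    except asyncio.TimeoutError:
        return "fallback"

result = asyncio.run(main())
print(result)  # "fallback"
```

Swallowing the CancelledError instead of re-raising is one of the sneaky bugs we'll look at: it leaves the caller believing the task finished normally.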
Python is a very popular language on the server and the desktop, but we can also enjoy its benefits in the embedded computing world. We will cover some unique challenges we face when putting Python into an embedded product.
Using Python in embedded products brings all the goodness of the Python ecosystem into the development process, resulting in a faster development cycle and access to a huge collection of packages. However, embedded computing devices differ in some respects from server or desktop applications. In this talk, I'll cover some of the challenges I've faced while developing a Python-based embedded device over the last few years, among them:
- What Python can and can't do in an embedded system
- Cross-compiling Python and its modules
- Controlling dependencies
- Understanding system constraints
- MicroPython and its use cases
Want to build faster Python apps without ditching Python? This talk shares how we supercharged a Python Valkey/Redis client by combining the power of asyncio with native speed via FFI.
We’ll begin by discussing a common challenge in Python apps—how to achieve high performance, particularly for I/O-heavy or async workloads, without abandoning the ecosystem. Then, we’ll dive into how we built key parts of a Valkey client to take advantage of asyncio's powerful concurrency model while using FFI to offload key performance-sensitive tasks to native code. We’ll show profiling before and after, walk through code samples, and share lessons on safely mixing Python with lower-level languages. Finally, we'll explore how these patterns can generalize to other async Python projects.
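The shape of the approach can be sketched with the stdlib alone, using libm as a stand-in for a real client's performance-critical native core: load native code over FFI with ctypes, and keep blocking native calls off the event loop thread.

```python
import asyncio
import ctypes
import ctypes.util

# Load a native library via FFI (libm here; a real client would load
# its own compiled core instead).
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

async def native_sqrt(x):
    # Run the blocking native call in a worker thread, keeping
    # the event loop free to make progress on other tasks.
    return await asyncio.to_thread(libm.sqrt, x)

async def main():
    # asyncio fans out the requests; native code does the crunching.
    return await asyncio.gather(*(native_sqrt(v) for v in (4.0, 9.0, 16.0)))

print(asyncio.run(main()))  # [2.0, 3.0, 4.0]
```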
In this talk I present some lesser-known gotchas and implicit behaviors of Foreign Keys in Django. We'll talk about what you need to pay attention to when defining FKs, how to change FKs without bringing your system to a halt, and how to optimize for space.
Not many know this, but Foreign Keys in Django have a lot more than meets the eye! In this talk we'll build a small Django app together and tackle many issues related to Foreign Keys in the process. We'll talk about indexes, safe migrations, concurrency, and performance. We'll also explore some of Django's implicit behaviors and discuss when and how we can do better!
After 4 years as a Tech Lead and Architect, my departure meant losing a key Python resource. In this talk, I’ll show how I used Python, AWS Claude, and Retrieval-Augmented Generation to build a digital clone of myself.
Leaving a role after serving as the primary technical reference for four intense years presented a significant challenge: how to transfer deep, nuanced Python knowledge without causing disruptions?
Conventional documentation lacked the dynamic interaction needed. This motivated me to build "Ben Bot," a sophisticated AI assistant created entirely using Python, AWS Claude, and the RAG technique.
In this talk, I'll walk attendees through my journey—from the initial idea and design considerations to integrating AWS Claude and overcoming Python-specific challenges during development. I will share real-world Python code examples, architectural choices made throughout the process, and practical tips for building AI-driven knowledge assistants.
Participants will leave with a clear understanding of how Python can enable the creation of interactive, intelligent chatbots to preserve institutional knowledge, ensuring smooth transitions when key team members move on.
You set out to ship features, now you’re stuck in utils.py wrangling edge cases. Glue code and redundant helpers bloat your project and lay out bait for bugs. With the right mindset and tools, learn how to focus on logic, not band-aids.
We’ve all been there: you’re building out a feature, something’s missing, and a copy-pasted snippet from GPT or Stack Overflow fits just right. It works, you move on, but those little helpers have a habit of quietly bloating your project, tiring reviewers, and hiding tiny bugs that only show up at the worst time.
We’ll look at how to replace duct-tapey micrologics with well-designed (and tested) tools, so your project isn’t just cleaner, but also easier to understand, debug, and extend. You’ll learn practical ways to cut boilerplate, write more expressive code, and make your logic shine.
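A tiny example of the kind of swap we'll practice: a hand-rolled helper next to the stdlib features that make it unnecessary.

```python
from collections import Counter

# The kind of helper that quietly accretes in utils.py:
def dedupe_keep_order(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# The stdlib already covers it: dicts preserve insertion order.
def dedupe(items):
    return list(dict.fromkeys(items))

print(dedupe(["a", "b", "a", "c", "b"]))   # ['a', 'b', 'c']

# Likewise, Counter replaces a hand-written counting loop:
print(Counter("banana").most_common(1))    # [('a', 3)]
```

The stdlib versions are shorter, already tested, and instantly recognizable to reviewers.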
Developers often face the dilemma of optimal vs practical solutions for complex challenges. This talk explores when a "good enough" heuristic approach is more efficient than pursuing perfect solutions, evaluating resource trade-offs to decide wisely.
Is finding the absolute perfect solution always the goal?
As Python developers, we often run into complex problems, where trying to achieve the ideal outcome can be incredibly time-consuming and resource-intensive. Think about a simple, yet complicated task, like efficiently matching items from two lists when you have specific constraints. This kind of challenge appears in various domains, like managing cloud infrastructure, handling security vulnerabilities, selecting appropriate AI models, and even just distributing tasks within a development team. Often, a "good enough" solution can be surprisingly effective and much more efficient in the real world.
During this presentation, I'll take you through a practical case study of a challenging pairing problem that you can all relate to. I'll demonstrate, using Python code, how a smart, weight-based heuristic method can lead to significant savings in runtime and system resources while still providing high-quality results across all the pairings.
By the end of this talk, you'll gain a clearer perspective on how to evaluate the trade-offs between striving for full optimization and embracing practical efficiency. Join me to understand when a "good enough" solution isn't just a compromise, but the wiser and more effective path forward!
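To make the trade-off concrete, here is a toy version of such a pairing problem (the scores are made up): a greedy, weight-based heuristic against the exhaustive optimum.

```python
from itertools import permutations

# score[i][j]: how good pairing task i with worker j would be.
score = [
    [9, 2, 4],
    [8, 7, 1],
    [6, 8, 3],
]

def greedy_pairs(score):
    """Heuristic: repeatedly take the best-scoring remaining pair."""
    cells = sorted(((s, i, j) for i, row in enumerate(score)
                    for j, s in enumerate(row)), reverse=True)
    used_i, used_j, pairs = set(), set(), {}
    for s, i, j in cells:
        if i not in used_i and j not in used_j:
            pairs[i] = j
            used_i.add(i)
            used_j.add(j)
    return pairs

def optimal_total(score):
    """Exhaustive O(n!) search: the 'perfect' answer, for comparison."""
    n = len(score)
    return max(sum(score[i][perm[i]] for i in range(n))
               for perm in permutations(range(n)))

greedy_total = sum(score[i][j] for i, j in greedy_pairs(score).items())
print(greedy_total, optimal_total(score))  # 18 20: close, at a fraction of the cost
```

The heuristic runs in roughly O(n^2 log n) instead of O(n!), and here gives up only 2 points of quality; whether that trade is worth it is exactly the judgment call this talk is about.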
When should I use decorators? Are type hints needed? Why care about PEP8? How can AI help? This session is for beginner Python enthusiasts who have mastered some of what this great language offers but still need pointers on how to use each tool.
Python has some really great traits, and contrary to the Zen of Python, there are usually many ways to get things done. In this session, we will discuss best practices that contribute to making code more readable and easier to maintain. You might already know how to use decorators, but when should you actually use them? Documentation is a good practice, but should you use docstrings, type hints, or one-liner comments? AI is great at proposing code, but what should you be paying attention to before implementing its suggestions? These are the types of questions we will explore together. I have been using Python for more than 6 years, and this talk is based on a workshop I have delivered to different teams in professional settings.
Python might not have a compilation step, but shipping a service still requires some build process (even if only to copy code somewhere). Let’s explore problems that arise in this situation, and how using a build system like Bazel can solve them.
Shipping a Python service can be as simple as cloning a repository and running a command. But as projects and teams grow, bespoke build processes start to form: README files with instructions on how to install external dependencies for different operating systems, scripts for building native extensions, and custom logic for choosing which tests to execute when running CI pipelines.
This leads to a poor developer experience: brittle environments, long CI cycles, slow builds, and huge container images (when using Docker).
In this talk, we’ll explore how a build system like Bazel approaches these problems and show how a minimal monorepo-style build configuration can be fast, efficient, and still remain developer-friendly.
Bad developer experience isn’t a given - it improves when developers, junior and senior alike, take charge. With a clear understanding of when Bazel can be a boon for Python projects, you’ll know if it’s the right fit for your project and how to approach integrating Bazel for reduced CI times, image sizes, and more hermetic builds.
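For orientation, a minimal rules_python build configuration looks roughly like the sketch below; target and dependency labels such as @pypi//requests vary with your setup, so treat these as placeholders.

```starlark
# BUILD.bazel: a minimal Python service layout with rules_python.
load("@rules_python//python:defs.bzl", "py_binary", "py_library", "py_test")

py_library(
    name = "service_lib",
    srcs = glob(["src/**/*.py"]),
    deps = ["@pypi//requests"],  # hypothetical pip dependency label
)

py_binary(
    name = "service",
    srcs = ["main.py"],
    deps = [":service_lib"],
)

py_test(
    name = "service_test",
    srcs = ["service_test.py"],
    deps = [":service_lib"],
)
```

Because Bazel knows this dependency graph, it can rebuild and retest only the targets affected by a change, which is where the CI-time savings come from.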
Learn how to fine-tune small language models efficiently using modern Python tools like Axolotl. A practical, GPU-conscious guide to customizing LLMs with QLoRA, chat templates, dataset chunking, and cloud-friendly workflows.
This 20-minute light talk walks through a real-world fine-tuning pipeline built entirely in Python. You'll learn how to structure and run scalable fine-tuning jobs, even on limited hardware like Colab or cloud GPU services like RunPod. Topics include:
• Why full fine-tuning is dead: a quick look at parameter-efficient approaches (like QLoRA)
• How Axolotl simplifies model loading, LoRA injection, and dataset prep
• Managing training across large datasets using chunked fine-tuning
• Moving beyond Colab: when and how to scale to multi-GPU training with DeepSpeed
• Performing inference on your fine-tuned model with minimal setup
No prior ML experience needed — just some Python familiarity and curiosity about LLMs.
Tired of slow web apps with big data? Learn how Python & WebAssembly dramatically speed up browser data tasks. See a live demo: processing a large CSV client-side with Python (via Pyodide) & Pandas/NumPy. Witness a performance comparison and more.
Discover a game-changing approach to web performance! Learn how to leverage Python (compiled to WebAssembly) to process large datasets directly in the browser with incredible speed. We'll walk through a practical use case involving CSV data and compare the performance gains against standard JavaScript implementations. Get ready to unlock new possibilities for data-intensive web applications.
I tried to build a WhatsApp agent to find my friends’ buried recommendations and tips - and failed spectacularly. But I learned more than I expected. This talk shares the journey, the tools, and why side projects are the best way to grow.
I was frustrated. All I wanted was to find my friends’ trusted recommendation for where to travel with the kids next weekend – buried somewhere in months of casual chatter in our local WhatsApp group. Google didn’t help, ChatGPT didn’t know, and re-asking the group felt silly. I needed something smarter – an agent that could surface what my people had already shared, no matter when or how casually they’d mentioned it.
That simple desire turned into a late-night obsession – a personal Python project that blended everything I knew about data science with the messy, unfamiliar world I was eager to explore: backend logic, interfaces, system design and bending tools until they (mostly) did what I needed. Because let’s face it, it’s never just about embeddings and clever semantic search algorithms, right?
In trying to build the perfect WhatsApp agent, I discovered something even more valuable: how passion projects can surprise us, stretch us, and quietly reshape what we think we’re capable of. In this talk, I’ll share my personal project journey – what I built, what broke, what it taught me, and why sometimes failure is the best teacher. You’ll leave with practical tools and fresh inspiration to start your own side project, the one born from your everyday frustration and can solve a real problem you care about.
Today, the key skill isn’t mastering every line of code - it’s keeping up. This talk shows how understanding core concepts, using AI tools, and writing effective prompts can accelerate learning and development in a fast-moving AI landscape.
In today's fast-evolving AI landscape, one of the biggest challenges isn't just learning what to build—but how to learn to build. In this talk, we'll share our journey of learning how to learn in the world of AI, focusing on understanding the right concepts before jumping into implementation.
We’ll explore how focusing on learning theory and concepts, combining using AI tools and a few good prompts - can help developers navigate the growing AI ecosystem more effectively.
Using Agents as our main use case, we'll walk through how we took an early prototype written in a simple notebook and scaled it into production-grade code based on LangChain’s LangGraph framework, wrapping it all up with a ready-made UI using Streamlit – all done quickly and simply using Cursor.
Whether you're just starting your AI journey or trying to bring structure to your experimental projects, this talk will give you a clear view of the critical skills and concepts that can help you scale your ideas—with agents as a practical and exciting example.
Wash, Dry, Analyze: Turning Dishwasher Logs into Clean Data
Using simple, “old-school” logging, I recorded my dishwasher’s energy and water use, then leveraged Python and pandas to clean, analyze, and visualize real-world data. A beginner-friendly dive into experiment design and data analysis.
I set up a controlled experiment on my dishwasher to uncover what’s really happening with energy and water use—because designing experiments is half the fun, and Python makes the rest a breeze. In this session I’ll show how I:
Designed test cycles and integrated power and flow sensors
Used pandas to import, clean, and flag anomalies in CSV logs
Applied descriptive stats (mean, median, outliers) to evaluate energy, water, and cost
Created clear, reproducible visualizations with matplotlib
The dishwasher was just an excuse to dive into pandas, and this talk is perfect for beginners eager to start their own data adventures.
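The import-clean-flag flow in miniature, with made-up sensor rows standing in for the real logs:

```python
import io
import pandas as pd

# A few fabricated rows of the kind of sensor log described above.
raw = io.StringIO(
    "timestamp,power_w,flow_lpm\n"
    "2025-01-01 18:00,2100,5.1\n"
    "2025-01-01 18:01,,5.0\n"        # missing power reading
    "2025-01-01 18:02,99999,5.2\n"   # obvious sensor glitch
    "2025-01-01 18:03,1950,4.9\n"
)

df = pd.read_csv(raw, parse_dates=["timestamp"])

# Flag anomalies instead of silently dropping them.
df["anomaly"] = df["power_w"].isna() | (df["power_w"] > 3000)
clean = df[~df["anomaly"]]

print(clean["power_w"].mean())    # 2025.0
print(clean["power_w"].median())  # 2025.0
```

From here, descriptive stats and a matplotlib plot of power over time are one-liners each.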
This talk will show how to set up a private LLM + RAG system using Python in an "air-gapped" environment. We’ll cover choosing efficient open-source models, setting up local vector databases, and optimizing retrieval in resource-limited environments.
When our team wanted to use LLMs with RAG, we quickly hit a wall—sending sensitive data to the cloud wasn’t an option. Whether it's business secrets, medical records, or legal documents, some data simply can’t leave a secure network. So, we had to build our own private AI pipeline.
In this talk, I’ll share how we set up a fully private LLM + RAG system using Python. We’ll dive into choosing efficient open-source models, setting up local vector databases, and making retrieval work in a resource-limited environment. Along the way, we’ll discuss trade-offs, optimizations, and how to squeeze the most out of smaller models without sacrificing too much intelligence.
By the end, you’ll have a clear road map for building your own secure AI pipeline—no cloud required!
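The retrieval half of the pipeline is, at its core, nearest-neighbor search over embeddings. A toy version with hypothetical document vectors (a real pipeline would get high-dimensional vectors from a local embedding model and store them in a vector database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; tiny dimensions for readability.
docs = {
    "contract_2023.txt": [0.9, 0.1, 0.0],
    "lab_results.txt":   [0.1, 0.8, 0.2],
    "meeting_notes.txt": [0.2, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector close to the lab-results embedding:
print(retrieve([0.0, 0.9, 0.1]))  # ['lab_results.txt']
```

The retrieved chunks are then inserted into the local LLM's prompt; none of this requires a network connection, which is the whole point.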
Embeddings power AI tools like search and chatbots — but what are they really? This talk explains embeddings in simple terms using Python, with real examples, humor, and no ML background required.
Embeddings are behind the magic of modern AI — powering search, recommendations, and those eerily accurate chatbots. But what are they, really? In this talk, Liza — a regular software engineer, not a data science PhD — breaks it down in plain English using real examples, bad charts, and trusty science-y Python tools. If you’ve ever wondered how words, products, or even bananas become vectors in high-dimensional space, this is your crash course.
Schedule to copy
Hosted by Cinema City
* Start time: 09:00
Language: English
Length: 45 min
AI agents can plan, act, and adapt, or at least, they appear to. Under the hood, they remain fragile systems built on prediction rather than true understanding. This talk explores what LLM agents are really capable of today, where they break, and how to design around their limits. Through real-world failure patterns and success stories, we’ll unpack the myths, highlight common traps, and share practical tools and evaluation techniques for building agents that are genuinely useful. Finally, we’ll look ahead to how these systems may evolve, and what you can do right now to stay ahead as agent-powered development becomes the norm.
Language: English
Length: 20 min
Speed up your pipelines by doing less! We’ll explore memory efficient caching, filtering, and take a deep dive into the often-overlooked Bloom filter — with practical examples to avoid unnecessary IO and computation.
Python has become powerful, but data pipelines, workloads, and API servers often suffer from unnecessary IO and redundant computation that slow things down.
In this talk, we’ll explore two essential techniques to speed up workloads by doing less: caching and filtering—and how to implement them efficiently, with relatively low memory overhead, along with real-world use cases where these techniques made an impact.
We’ll also take a closer look at an often overlooked and misunderstood tool: the Bloom filter. You’ll learn how it works, when it’s useful (and when it’s not), and how it helps you maintain a low memory footprint while effectively avoiding unnecessary database queries, API calls, or heavy computation—before they even happen.
Whether you're building data pipelines, APIs, or wrangling large datasets, this talk will give you practical insights and Pythonic tools to write smarter, faster, and more memory-conscious code.
Language: Hebrew
Length: 20 min
Explore advanced enum usage in Python: define methods, attach metadata, and create enums dynamically. Learn the benefits and pitfalls of the enum singleton to write more expressive and maintainable code.
While working on the open-source hdate library (which I maintain) and on internal tools at Intel, I came across some lesser-known but powerful uses of Python enums. From attaching extra data and methods to make code more expressive, to dynamically generating enums from configuration files, these patterns can add clarity to your code if used with care. In this talk, I’ll share these techniques, their advantages, and the subtle pitfalls to watch out for when using a singleton data structure.
Language: English
Length: 20 min
Code reloading is fun! Python's semantics for def, class complicate it. There are multiple approaches and multiple libraries; I don't think any is one-size-fits-all. This talk will teach you how they work to make informed choice for your code.
Edit->reload cycle on running software may be more fun & productive than exiting and losing state. Python's semantics for class and defcreating a new object are not as friendly to reloading as Smalltalk/Lisp/Ruby which patch in-place. Reloading is still very possible, with multiple approaches and libraries but I believe you better understand the issues and implementation tradeoffs. - What importlib.reload() does and does not. - Copied references: from ... import ..., instances, callbacks & closures, etc. - => Recording what-imported-what dependency graph. - => Patching classes/functions in-place vs. Updating references? Limitations. - A secret weapon: gc.get_referrers() - What IPython's %autoreload, jurigged, limeade do? - Renames/deletions. Problem of intent. => jurigged AST diffing?! - Top-level code, singletons, derived values. => Hard. Idempotent try: except NameError: style.
Language: English
Length: 20 min
Decorators are one of the most powerful and expressive features in Python, yet they can be confusing for many developers. In this talk, we’ll demystify decorators by exploring how they work, how to write them, and how to use them effectively.
This talk is designed for intermediate Python developers who want to take their skills to the next level by mastering one of Python’s most versatile features: decorators.
We’ll begin by revisiting the concept of functions as first-class objects, which sets the foundation for understanding how decorators operate. From there, we’ll walk through writing simple decorators, then progress to more advanced topics such as: + Decorators with arguments + Stacking multiple decorators + Class-based decorators + Decorators applied to methods and classes
Real-world examples will highlight how decorators are used in practice, such as in web development, logging, access control, and benchmarking.
Attendees will leave this talk with a clear mental model of decorators, the ability to confidently implement them, and inspiration to apply them in their own projects. This talk will also provide best practices and tips for keeping decorator logic clean, testable, and maintainable.
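To make the "decorators with arguments" step concrete, here is a sketch of a hypothetical retry decorator factory (the retry helper and its parameters are illustrative, not from the talk):

```python
import functools

def retry(times):
    """Decorator factory: its argument configures the decorator it returns."""
    def decorator(func):
        @functools.wraps(func)  # preserve the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except ValueError:
                    if attempt == times - 1:
                        raise  # out of attempts: let the error propagate
        return wrapper
    return decorator

calls = []

@retry(times=3)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("not yet")
    return "ok"

print(flaky())         # ok (succeeds on the third attempt)
print(flaky.__name__)  # flaky, thanks to functools.wraps
```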
This talk can be delivered in Hebrew (my mother tongue) and English.
Language: English
Length: 20 min
Servers crash, containers restart, and services fail. This talk introduces Durable Python: a way to make workflows survive infrastructure failures, crucial for distributed systems, using AST tricks and durable execution platforms.
Modern Python workflows are more distributed than ever: orchestrating APIs, cloud services, microservices, and databases across environments where infrastructure is not always reliable. When a server crashes, a container restarts, or a service call fails mid-process, traditional Python scripts often must restart from scratch, risking duplicated work, data inconsistencies, or lost progress.
Durable Python changes this. It introduces a model where workflow state is preserved, and execution can automatically resume from the point of failure, without manual recovery, complex retry logic, or redundant operations. This talk will cover: Why infrastructure failures are inevitable — and why Python needs built-in durability to handle them.
The core principles of durable execution: state persistence, fault recovery, and reliable orchestration.
Practical examples and patterns for introducing durability into real-world Python automations, including CI pipelines, DevOps processes, microservice orchestration, and long-running AI agents.
How Durable Python works under the hood — leveraging durable execution platforms, transforming Python's AST, converting non-deterministic calls into trackable activities (like automatic checkpoints), and orchestrating everything seamlessly.
We’ll also live-demo a long-running Python workflow, simulate real infrastructure failures, and show the process resuming exactly where it left off — without re-executing completed steps or losing state.
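The platform machinery itself is out of scope here, but the checkpoint-and-replay idea can be illustrated with a toy sketch (the JSON state file and step names are purely hypothetical; real durable-execution engines do this persistence transparently):

```python
import json, pathlib

STATE = pathlib.Path("workflow_state.json")   # hypothetical checkpoint store

def checkpointed(step_name, fn, *args):
    """Run fn once; on restarts, replay the recorded result instead of re-running.
    A toy version of turning non-deterministic calls into trackable activities."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    if step_name in state:
        return state[step_name]               # resume: skip completed work
    result = fn(*args)
    state[step_name] = result
    STATE.write_text(json.dumps(state))       # persist before moving on
    return result

def workflow():
    a = checkpointed("fetch", lambda: 21)     # imagine a flaky API call here
    return checkpointed("double", lambda x: x * 2, a)

print(workflow())                             # 42
STATE.unlink()                                # clean up the demo file
```

If the process dies between the two steps, a re-run replays "fetch" from the state file instead of repeating the work, which is the behavior the live demo shows at platform scale.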
Language: English
Length: 20 min
A new technique is presented where objects turn immutable upon first hash calculation. This approach enables convenient mutability up until the object is used as a dict-key or set-item, without compromising on safety beyond that point.
Python's hash-based collections rely on an implicit contract: once an object is used as a dictionary key or added to a set, its hash value (and thereby contents) should never change. Yet Python's design encourages mutable user-defined classes without enforcing immutability when hashing occurs - leading to subtle bugs when objects modify their state after being hashed.
This talk introduces the "Lazy-Freeze" pattern: a technique where objects automatically transition from mutable to immutable upon their first hash calculation. Unlike the traditional approach which requires immutable construction, Lazy-Freeze allows objects to begin their lifecycle with convenient mutability, then seamlessly lock their state when stability becomes critical.
We'll introduce an implementation mainly using Python's hash and setattr magic methods, discuss examples and performance implications, and explore some sharp corners.
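A minimal sketch of the pattern, assuming the real implementation covers more corner cases (slots, nested mutability, pickling):

```python
class LazyFreeze:
    """Mutable until first hash; frozen afterwards."""
    def __init__(self, **fields):
        self.__dict__["_frozen"] = False   # write via __dict__ to bypass __setattr__
        self.__dict__.update(fields)

    def __setattr__(self, name, value):
        if self.__dict__.get("_frozen"):
            raise AttributeError(f"frozen after first hash; cannot set {name!r}")
        self.__dict__[name] = value

    def __hash__(self):
        self.__dict__["_frozen"] = True    # the first hash locks the object
        return hash(tuple(sorted((k, v) for k, v in self.__dict__.items()
                                 if k != "_frozen")))

    def __eq__(self, other):
        return isinstance(other, LazyFreeze) and self.__dict__ == other.__dict__

p = LazyFreeze(x=1)
p.x = 2              # fine: not hashed yet
d = {p: "value"}     # using p as a dict key triggers the freeze
try:
    p.x = 3
except AttributeError as e:
    print("blocked:", e)
```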
Language: Hebrew
Length: 20 min
Explore Python’s groundbreaking shift beyond the Global Interpreter Lock (GIL). Understand the design tradeoffs, challenges, and performance impacts of running Python without the GIL.
The Global Interpreter Lock (GIL) has been a key part of Python's design, simplifying memory management but limiting parallelism in multi-threaded programs. Recent changes now allow Python to run without the GIL, unlocking true parallelism.
In this talk, we'll explore the implications of this shift: the internal changes to Python, the tradeoffs made for thread safety and performance, and the challenges overcome in the process. We’ll compare execution with and without the GIL through examples and benchmarks, and discuss the potential impact on Python developers and the ecosystem.
Whether you’re interested in Python’s internals, concurrency, or its evolving design, this session will provide a concise and practical overview of one of Python’s most significant updates.
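A simple way to see the difference for yourself is a CPU-bound thread benchmark; on a free-threaded (3.13+) build the four threads can actually run in parallel (sys._is_gil_enabled only exists on 3.13+, hence the defensive getattr):

```python
import sys, time
from concurrent.futures import ThreadPoolExecutor

def burn(n):
    """Pure-Python CPU work; with the GIL held, threads serialize on this."""
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 200_000
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(burn, [N] * 4))
elapsed = time.perf_counter() - start

gil = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"GIL enabled: {gil}; 4 threads took {elapsed:.3f}s")
```

On a GIL build the wall time is roughly 4x one call to burn(); on a free-threaded build it approaches 1x, at the cost of the per-thread overheads the talk discusses.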
Language: English
Length: 20 min
Securing applications is often done using cryptography, but if you don't do it right, it may be broken and you wouldn't even know it. Learn the common cryptographic mistakes in Python and how to fix them using safe practices.
Implementing cryptography is like handling a loaded weapon — powerful, but dangerous in the wrong hands. In this talk, we’ll explore how to properly implement cryptography in Python, using real-world examples of code that led to serious security vulnerabilities. From insecure random number generation and broken key management to misusing cryptographic primitives and rolling your own protocols, we’ll walk through the most common (and often subtle) mistakes developers make. We’ll also cover the correct approaches using modern Python libraries. Whether you need cryptography in your code for secure communication or encrypting data at rest, or are just curious about the meaning of the inputs to a cryptographic function, this session will equip you with the knowledge to do cryptography right — or at least know when to call in an expert.
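The talk's examples aren't reproduced here, but three of the classic stdlib-level fixes look like this (the specific parameters, such as the PBKDF2 iteration count, are common recommendations, not values from the talk):

```python
import hashlib, hmac, os, secrets

# Mistake 1: random.random() for tokens is predictable; use secrets instead
token = secrets.token_urlsafe(32)

# Mistake 2: a bare sha256(password) invites rainbow tables; salt and stretch it
password = b"correct horse battery staple"
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Mistake 3: comparing MACs with == leaks timing; use a constant-time compare
mac = hmac.new(key, b"message", hashlib.sha256).digest()
assert hmac.compare_digest(mac, hmac.new(key, b"message", hashlib.sha256).digest())
print(len(token), len(key))  # 43 32
```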
Language: Hebrew
Length: 20 min
What if sound could drive movement? This talk introduces a Python-based program I developed that transforms audio into motion patterns - built for animatronics, optimized for embedded devices, and demonstrated through a raven named Samuel.
How do you get an animatronic raven to speak like a real one - complete with emotion, variation, and natural presence? In this talk, we'll explore how I built a Python-powered audio analysis system that brings Samuel, my interactive animatronic raven, to life. The goal wasn’t just to react to sound, but to create believable, nuanced motion driven by real audio data.
The challenge: to translate raven vocalizations into natural beak movements, without manually scripting the motion for each sound file. I needed a system that could analyze audio and generate movement instructions automatically - flexible, scalable, and Python-driven.
Using libraries like librosa, numpy, and matplotlib, I built a system that analyzes raven sounds, detects vocal energy peaks, and generates servo-ready movement instructions. To guide development, I created visual graphs of the audio - including RMS energy, volume, and frequency - to better understand how sound translated into motion. The program also generates multiple movement maps per sound clip, selecting one at random at runtime to avoid repetitive, robotic behavior.
We'll dive into:
- Audio signal analysis with librosa (RMS, STFT, clustering).
- Generating servo-ready binary movement patterns from raw audio.
- Designing variability through randomized motion mapping.
- Visualizing sound energy.
- Controlling servos in real time using Python and Raspberry Pi.
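librosa itself isn't shown here; as a stand-in, this pure-Python sketch computes the same frame-wise RMS quantity and turns it into a binary movement map (the frame size, hop, and threshold are toy values):

```python
import math

def rms_frames(samples, frame_size=4, hop=2):
    """Frame-wise RMS energy, the quantity librosa.feature.rms computes."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        frames.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return frames

def to_servo_pattern(energies, threshold):
    """1 = open beak, 0 = closed: a binary movement map from energy peaks."""
    return [1 if e > threshold else 0 for e in energies]

quiet, loud = [0.1, -0.1, 0.1, -0.1], [0.9, -0.8, 0.9, -0.9]
signal = quiet + loud + quiet
energy = rms_frames(signal)
print(to_servo_pattern(energy, threshold=0.5))  # [0, 1, 1, 1, 0]
```

Generating several such maps per clip and picking one at random at runtime gives the non-repetitive motion described above.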
Language: English
Length: 20 min
We all use asyncio, but not everyone truly understands what happens behind the await.
In this talk, we’ll dive into advanced patterns, real-world pitfalls, and debugging strategies for developers who want to move beyond the basics.
We all use asyncio - but let’s be real: how many of us actually understand what’s happening behind the scenes when we await something?
The moment you step off the happy path - into timeouts, cancellations, and juggling dozens of async tasks - things get a little chaotic, fast.
This talk is a deep dive into the corners of asyncio most of us don’t look at until something breaks. We’ll unpack what really happens behind await, how to manage task lifecycles and what to do when your app just…stops. We’ll explore advanced patterns, sneaky bugs, and practical tools to help you trace and debug what your async code is actually doing.
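As one example of stepping off the happy path, here is what cancellation via a timeout actually looks like (note that the CancelledError must be re-raised after cleanup):

```python
import asyncio

async def slow_task():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        print("cleaning up before dying")  # cancellation is catchable here...
        raise                              # ...but must be re-raised

async def main():
    task = asyncio.create_task(slow_task())
    try:
        await asyncio.wait_for(task, timeout=0.1)  # cancels the task on timeout
    except asyncio.TimeoutError:
        print("timed out")
    print("task cancelled:", task.cancelled())

asyncio.run(main())
```

Swallowing the CancelledError instead of re-raising it is one of the sneaky bugs that makes an app "just... stop".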
Language: Hebrew
Length: 20 min
Python is a very popular language in the server and in the desktop, but we can also enjoy its benefits in the embedded computing world. We will cover some unique challenges we're facing when implementing Python in an embedded product.
Using Python in embedded products brings all the goodness of the Python ecosystem into the development process, which results in a faster development cycle and access to a huge set of libraries. However, embedded computing devices differ in some respects from server or desktop applications. In this talk, I'll cover some of the challenges I faced while developing a Python-based embedded device over the last few years. Among them:
- What Python can and can't do in an embedded system
- Cross-compiling Python and its modules
- Controlling dependencies
- Understanding system constraints
- MicroPython and its use cases
Language: Hebrew
Length: 20 min
Want to build faster Python apps without ditching Python? This talk shares how we supercharged a Python Valkey/Redis client by combining the power of asyncio with native speed via FFI.
We’ll begin by discussing a common challenge in Python apps—how to achieve high performance, particularly for I/O-heavy or async workloads, without abandoning the ecosystem. Then, we’ll dive into how we built key parts of a Valkey client to take advantage of asyncio's powerful concurrency model while using FFI to offload key performance-sensitive tasks to native code. We’ll show profiling before and after, walk through code samples, and share lessons on safely mixing Python with lower-level languages. Finally, we'll explore how these patterns can generalize to other async Python projects.
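The talk's client binds native code via FFI; as a related, purely stdlib illustration, here is the general pattern of keeping the event loop free while native C code (zlib here, standing in for the FFI layer) does the heavy lifting in a thread pool:

```python
import asyncio, os, zlib

async def compress_many(payloads):
    """Offload CPU-bound native work off the event loop so other
    coroutines keep running while C code crunches in worker threads."""
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, zlib.compress, p) for p in payloads]
    return await asyncio.gather(*tasks)

payloads = [os.urandom(1000) for _ in range(4)]
compressed = asyncio.run(compress_many(payloads))
print(len(compressed))  # 4
```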
Language: Hebrew
Length: 20 min
In this talk I present some lesser-known gotchas and implicit behaviors of Foreign Keys in Django. We'll talk about what you need to pay attention to when defining FKs, how to change FKs without bringing your system to a halt, and how to optimize for space.
Not many know this, but Foreign Keys in Django have a lot more than meets the eye! In this talk we'll build a small Django app together and tackle many issues related to Foreign Keys in the process. We'll talk about indexes, safe migrations, concurrency and performance. We'll also explore some of Django's implicit behaviors and discuss when and how we can do better!
Language: Hebrew
Length: 10 min
After 4 years as a Tech Lead and Architect, my departure meant losing a key Python resource. In this talk, I’ll show how I used Python, AWS Claude, and Retrieval-Augmented Generation to build a digital clone of myself.
Leaving a role after serving as the primary technical reference for four intense years presented a significant challenge: how to transfer deep, nuanced Python knowledge without causing disruptions?
Conventional documentation lacked the dynamic interaction needed. This motivated me to build "Ben Bot," a sophisticated AI assistant created entirely using Python, AWS Claude, and the RAG technique.
In this talk, I'll walk attendees through my journey—from the initial idea and design considerations to integrating AWS Claude and overcoming Python-specific challenges during development. I will share real-world Python code examples, architectural choices made throughout the process, and practical tips for building AI-driven knowledge assistants.
Participants will leave with a clear understanding of how Python can enable the creation of interactive, intelligent chatbots to preserve institutional knowledge, ensuring smooth transitions when key team members move on.
Language: Hebrew
Length: 10 min
You set out to ship features, now you’re stuck in utils.py wrangling edge cases. Glue code and redundant helpers bloat your project and lay out bait for bugs. With the right mindset and tools, learn how to focus on logic, not band-aids.
We’ve all been there: you’re building out a feature, something’s missing, and a copy-pasted snippet from GPT or Stack Overflow fits just right. It works, you move on, but those little helpers have a habit of quietly bloating your project, tiring reviewers, and hiding tiny bugs that only show up at the worst time.
We’ll look at how to replace duct-tapey micrologics with well-designed (and tested) tools, so your project isn’t just cleaner, but also easier to understand, debug, and extend. You’ll learn practical ways to cut boilerplate, write more expressive code, and make your logic shine.
Language: Hebrew
Length: 10 min
Developers often face the dilemma of optimal vs practical solutions for complex challenges. This talk explores when a "good enough" heuristic approach is more efficient than pursuing perfect solutions, evaluating resource trade-offs to decide wisely.
Is finding the absolute perfect solution always the goal?
As Python developers, we often run into complex problems, where trying to achieve the ideal outcome can be incredibly time-consuming and resource-intensive. Think about a simple-sounding yet complicated task, like efficiently matching items from two lists when you have specific constraints. This kind of challenge appears in various domains, like managing cloud infrastructure, handling security vulnerabilities, selecting appropriate AI models, and even just distributing tasks within a development team. Often, a "good enough" solution can be surprisingly effective and much more efficient in the real world.
During this presentation, I'll take you through a practical case study of a challenging pairing problem that you can all relate to. I'll demonstrate, using Python code, how a smart, weight-based heuristic method can lead to significant savings in runtime and system resources while still providing high-quality results across all the pairings.
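The talk's actual case study isn't reproduced here, but the flavor of a weight-based heuristic can be sketched with a greedy pairing (scores and names are made up):

```python
def greedy_pairing(scores):
    """scores maps (item_a, item_b) -> weight. Greedily take the best remaining
    pair. Not optimal (that's the assignment problem), but fast and often
    'good enough' in practice."""
    used_a, used_b, pairs = set(), set(), []
    for (a, b), w in sorted(scores.items(), key=lambda kv: -kv[1]):
        if a not in used_a and b not in used_b:
            pairs.append((a, b, w))
            used_a.add(a)
            used_b.add(b)
    return pairs

scores = {("task1", "alice"): 9, ("task1", "bob"): 7,
          ("task2", "alice"): 8, ("task2", "bob"): 3}
print(greedy_pairing(scores))
# greedy scores 9 + 3 = 12; the optimal assignment (7 + 8 = 15) costs more to find
```

That gap between 12 and 15 is exactly the trade-off the talk weighs: how much runtime and complexity is the last bit of quality worth?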
By the end of this talk, you'll gain a clearer perspective on how to evaluate the trade-offs between striving for full optimization and embracing practical efficiency. Join me to understand when a "good enough" solution isn't just a compromise, but the wiser and more effective path forward!
Language: Hebrew
Length: 10 min
When should I use decorators? Are type hints needed? Why care about PEP8? How can AI help? This session is for beginner Python enthusiasts who have mastered some of what this great language offers but still need pointers on how to use each tool.
Python has some really great traits, and contrary to the Zen of Python, there are usually many ways to get things done. In this session, we will discuss best practices that contribute to making code more readable and easier to maintain. You might already know how to use decorators, but when should you actually use them? Documentation is a good practice, but should you use docstrings, type hints, or one-liner comments? AI is great at proposing code, but what should you be paying attention to before implementing its suggestions? These are the types of questions we will explore together. I have been using Python for more than 6 years, and this talk is based on a workshop I have delivered to different teams in professional settings.
Language: Hebrew
Length: 10 min
Python might not have a compilation step, but shipping a service still requires some build process (even if only to copy code somewhere). Let’s explore problems that arise in this situation, and how using a build system like Bazel can solve them.
Shipping a Python service can be as simple as cloning a repository and running a command. But as projects and teams grow, bespoke build processes start to form: README files with instructions on how to install external dependencies for different operating systems, scripts for building native extensions, and custom logic for choosing which tests to execute when running CI pipelines.
This leads to a poor developer experience: brittle environments, long CI cycles, slow builds, and huge container images (when using Docker).
In this talk, we’ll explore how a build system like Bazel approaches these problems and show how a minimal monorepo-style build configuration can be fast, efficient, and still remain developer-friendly.
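As a taste of what such a configuration looks like, here is a minimal, illustrative BUILD.bazel using rules_python (target names, the glob pattern, and the @pypi repository label all depend on your workspace setup):

```python
# BUILD.bazel (illustrative): targets declare their sources and dependencies,
# so Bazel can cache builds and rerun only the tests whose inputs changed.
load("@rules_python//python:defs.bzl", "py_binary", "py_library", "py_test")

py_library(
    name = "service_lib",
    srcs = glob(["service/*.py"]),
    deps = ["@pypi//requests"],  # pip deps wired in via rules_python's pip support
)

py_binary(
    name = "service",
    srcs = ["main.py"],
    deps = [":service_lib"],
)

py_test(
    name = "service_test",
    srcs = ["service_test.py"],
    deps = [":service_lib"],
)
```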
Bad developer experience isn’t a given - it improves when developers, junior and senior alike, take charge. With a clear understanding of when Bazel can be a boon for Python projects, you’ll know if it’s the right fit for your project and how to approach integrating Bazel for reduced CI times, image sizes, and more hermetic builds.
Language: English
Length: 20 min
Learn how to fine-tune small language models efficiently using modern Python tools like Axolotl. A practical, GPU- conscious guide to customizing LLMs with QLoRA, chat templates, dataset chunking, and cloud-friendly workflows.
This 20-minute light talk walks through a real-world fine-tuning pipeline built entirely in Python. You'll learn how to structure and run scalable fine-tuning jobs, even on limited hardware like Colab or cloud GPU services like RunPod. Topics include:
- Why full fine-tuning is dead: a quick look at parameter-efficient approaches (like QLoRA)
- How Axolotl simplifies model loading, LoRA injection, and dataset prep
- Managing training across large datasets using chunked fine-tuning
- Moving beyond Colab: when and how to scale to multi-GPU training with DeepSpeed
- Performing inference on your fine-tuned model with minimal setup
No prior ML experience needed — just some Python familiarity and curiosity about LLMs.
Language: Hebrew
Length: 20 min
Tired of slow web apps with big data? Learn how Python & WebAssembly dramatically speed up browser data tasks. See a live demo: processing a large CSV client-side with Python (via Pyodide) & Pandas/NumPy. Witness a performance comparison and more.
Discover a game-changing approach to web performance! Learn how to leverage Python (compiled to WebAssembly) to process large datasets directly in the browser with incredible speed. We'll walk through a practical use case involving CSV data and compare the performance gains against standard JavaScript implementations. Get ready to unlock new possibilities for data-intensive web applications.
Language: Hebrew
Length: 20 min
I tried to build a WhatsApp agent to find my friends’ buried recommendations and tips - and failed spectacularly. But I learned more than I expected. This talk shares the journey, the tools, and why side projects are the best way to grow.
I was frustrated. All I wanted was to find my friends’ trusted recommendation for where to travel with the kids next weekend – buried somewhere in months of casual chatter in our local WhatsApp group. Google didn’t help, ChatGPT didn’t know, and re-asking the group felt silly. I needed something smarter – an agent that could surface what my people had already shared, no matter when or how casually they’d mentioned it.
That simple desire turned into a late-night obsession – a personal Python project that blended everything I knew about data science with the messy, unfamiliar world I was eager to explore: backend logic, interfaces, system design and bending tools until they (mostly) did what I needed. Because let’s face it, it’s never just about embeddings and clever semantic search algorithms, right? In trying to build the perfect WhatsApp agent, I discovered something even more valuable: how passion projects can surprise us, stretch us, and quietly reshape what we think we’re capable of.
In this talk, I’ll share my personal project journey – what I built, what broke, what it taught me, and why sometimes failure is the best teacher. You’ll leave with practical tools and fresh inspiration to start your own side project, the one born from your everyday frustration that can solve a real problem you care about.
Language: Hebrew
Length: 20 min
Today, the key skill isn’t mastering every line of code - it’s keeping up. This talk shows how understanding core concepts, using AI tools, and writing effective prompts can accelerate learning and development in a fast-moving AI landscape.
In today's fast-evolving AI landscape, one of the biggest challenges isn't just learning what to build—but how to learn to build. In this talk, we'll share our journey of learning how to learn in the world of AI, focusing on understanding the right concepts before jumping into implementation.
We’ll explore how focusing on learning theory and concepts, combining using AI tools and a few good prompts - can help developers navigate the growing AI ecosystem more effectively.
Using Agents as our main use case, we'll walk through how we took an early prototype written in a simple notebook and scaled it into a production-grade code, based on LangChain’s LangGraph framework, wrapping it all up with a ready-made UI using Streamlit – all done fast and simple using Cursor.
Whether you're just starting your AI journey or trying to bring structure to your experimental projects, this talk will give you a clear view of the critical skills and concepts that can help you scale your ideas—with agents as a practical and exciting example.
Language: English
Length: 20 min
Using simple, “old-school” logging, I recorded my dishwasher’s energy and water use, then leveraged Python and pandas to clean, analyze, and visualize real-world data. A beginner-friendly dive into experiment design and data analysis.
Wash, Dry, Analyze: Turning Dishwasher Logs into Clean Data
I set up a controlled experiment on my dishwasher to uncover what’s really happening with energy and water use—because designing experiments is half the fun, and Python makes the rest a breeze. In this session I’ll show how I:
Designed test cycles and integrated power and flow sensors
Used pandas to import, clean, and flag anomalies in CSV logs
Applied descriptive stats (mean, median, outliers) to evaluate energy, water, and cost
Created clear, reproducible visualizations with matplotlib
The dishwasher was just an excuse to dive into pandas, and this talk is perfect for beginners eager to start their own data adventures.
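The talk uses pandas; the same descriptive statistics can be sketched with the stdlib alone (the readings and the outlier rule below are made-up illustrations, not the talk's data):

```python
import statistics

# hypothetical per-cycle readings: (kWh, litres)
cycles = [(1.05, 9.8), (1.10, 10.1), (0.98, 9.5), (2.40, 21.0), (1.02, 9.9)]
energy = [kwh for kwh, _ in cycles]

mean = statistics.mean(energy)
median = statistics.median(energy)
# crude anomaly rule: flag cycles using 50% more energy than the median
outliers = [e for e in energy if e > 1.5 * median]
print(f"mean={mean:.2f} kWh, median={median:.2f} kWh, outliers={outliers}")
```

The pandas version does the same with DataFrame.describe() plus a boolean mask, and scales to thousands of logged rows.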
Language: Hebrew
Length: 20 min
This talk will show how to set up a private LLM + RAG system using Python in an air-gapped environment. We’ll cover choosing efficient open-source models, setting up local vector databases, and optimizing retrieval in resource-limited environments.
When our team wanted to use LLMs with RAG, we quickly hit a wall—sending sensitive data to the cloud wasn’t an option. Whether it's business secrets, medical records, or legal documents, some data simply can’t leave a secure network. So, we had to build our own private AI pipeline.
In this talk, I’ll share how we set up a fully private LLM + RAG system using Python. We’ll dive into choosing efficient open-source models, setting up local vector databases, and making retrieval work in a resource-limited environment. Along the way, we’ll discuss trade-offs, optimizations, and how to squeeze the most out of smaller models without sacrificing too much intelligence.
By the end, you’ll have a clear road map for building your own secure AI pipeline—no cloud required!
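The real pipeline uses an open-source embedding model and a local vector database; this toy sketch stands in for both with a bag-of-words vectorizer, just to show the retrieval step that happens before the LLM is ever called:

```python
import math
from collections import Counter

def vec(text):
    """Toy bag-of-words 'embedding'; the real pipeline uses a local model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    if dot == 0:
        return 0.0
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

docs = ["patient records stay on the local network",
        "the vector database runs entirely offline",
        "lunch menu for the cafeteria"]
index = [(d, vec(d)) for d in docs]   # stands in for the vector database

query = vec("offline vector database")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])   # the retrieved chunk is then pasted into the local LLM's prompt
```

Nothing here leaves the machine, which is the whole point of the air-gapped setup.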
Language: English
Length: 20 min
Embeddings power AI tools like search and chatbots — but what are they really? This talk explains embeddings in simple terms using Python, with real examples, humor, and no ML background required.
Embeddings are behind the magic of modern AI — powering search, recommendations, and those eerily accurate chatbots. But what are they, really? In this talk, Liza — a regular software engineer, not a data science PhD — breaks it down in plain English using real examples, bad charts, and trusty science-y Python tools. If you’ve ever wondered how words, products, or even bananas become vectors in high-dimensional space, this is your crash course.
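A tiny, hand-made illustration of the idea, using made-up two-dimensional "embeddings" (real models learn hundreds of dimensions from data rather than two labeled axes):

```python
import math

# toy vectors on two invented axes: (fruitiness, royalty)
emb = {
    "banana": (0.9, 0.0),
    "apple":  (0.8, 0.1),
    "king":   (0.1, 0.9),
    "queen":  (0.2, 0.95),
}

def cosine(a, b):
    """Cosine similarity: 1 means pointing the same way, 0 means unrelated."""
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(emb["banana"], emb["apple"]))  # close to 1: similar things
print(cosine(emb["banana"], emb["king"]))   # close to 0: unrelated things
```

Search, recommendations, and chatbots all reduce to this: embed everything, then rank by similarity in that vector space.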