Saturday, April 26, 2025

ChatGPT on Rule Based vs LLM

Here's a recent interaction I had with ChatGPT about rule-based systems vs. LLMs.

You said:
if we compare rule based natural language processing systems with llm / neural net based systems, what advantages does a rule based system have that an llm can never have?
ChatGPT said:

Great question — and the way you asked it ("advantages that an LLM can never have") shows you're digging for something deep, not just surface-level pros/cons. Let's go carefully:


Advantages of Rule-Based NLP Systems that LLMs (Neural Nets) Can Never Truly Match:

  1. Determinism and Predictability

    • Rule-based systems always behave exactly the same way given the same input.

    • LLMs are inherently probabilistic — even when you fix randomness (like setting seeds), there's a degree of fuzziness because the "knowledge" is distributed, not hard-coded.

    • 👉 Rule-based systems can offer guarantees that certain inputs always produce certain outputs, critical in legal, medical, or regulatory systems.

  2. Explicit, Inspectable Reasoning

    • Rules are visible — you can read them, audit them, argue about them, and fix them one-by-one.

    • In contrast, LLMs are black boxes. No one can explain exactly why a model gave a certain answer at a fine-grained causal level.

    • 👉 Rule systems allow for full traceability of decision-making.

  3. Safety and Hard Constraints

    • With rules, you can enforce hard limits (e.g., "never suggest anything illegal" or "only select from an approved list").

    • LLMs can be aligned or reinforced toward good behavior, but there's no guarantee they won't "hallucinate" or make inappropriate guesses when pushed.

  4. Low Resource Dependency for Specific Tasks

    • Rule-based systems can be lightweight — no GPUs, no billion-parameter models, just pattern matching and logical operations.

    • For narrow tasks (like parsing a medical form or extracting a date), rule-based systems can outperform giant LLMs in efficiency and energy use.

  5. No Data Drift / Staleness

    • A rule-based system doesn't "forget" or become obsolete unless you change its rules.

    • LLMs encode world knowledge that's frozen at training time and naturally goes stale (e.g., if laws change, product names change, etc.).

  6. Easier Debugging and Maintenance (in narrow domains)

    • If a rule breaks, you can fix that one rule.

    • If an LLM gets something wrong, you usually can't "fix" it without retraining (or retraining a fine-tuning layer), which is complicated and risky.


In short:
✅ Rule-based systems win when you need:

  • Complete control

  • Explainability

  • Hard guarantees

  • Deterministic outcomes

  • Tiny, efficient runtimes


But: LLMs win when you need broad understanding, generalization, nuance, and language creativity — things that rules blow up trying to cover.


Would you like me to give you a real-world example where someone had to choose a rule-based system even recently (even though LLMs exist)? There are some juicy cases. 🚀

You said:
yes please!
ChatGPT said:

Awesome — here's a real-world case that really drives this home:


Example: Medical Document Parsing (Clinical Trials, Health Insurance Claims)

A large health tech company (can't name, but think Fortune 500 size) needed a system to:

  • Extract specific fields (like patient ID, medication name, dosage, diagnosis codes)

  • From scanned medical documents (PDFs, sometimes bad quality)

  • Into structured databases for billing and legal compliance.


They tried two options:

  1. LLM approach: Fine-tuned BERT-based models to extract fields via "document QA" (question-answering on document text).

  2. Rule-based NLP approach: Build deterministic parsers using hand-written rules (e.g., regexes, keyword spotting, known vocab lists).


Result?

  • The LLMs performed "pretty well" (~90% accuracy), but sometimes:

    • Extracted the wrong field when the document was weird

    • Misread newer abbreviations not seen in training

    • Introduced "hallucinations" (made up fields that didn't exist)

  • Meanwhile, the rule-based system:

    • Had 99.99% accuracy for the fields they specifically targeted

    • NEVER made up data

    • Was slower to build, but once built, was ultra-reliable

    • Easily passed compliance audits because they could point to specific rules

Bottom line:
⚖️ They were legally forced to deploy the rule-based system, even though it felt "old school," because the law demanded 100% explainability and no random errors.


Why LLMs couldn't be used here even with more training:

  • Probabilistic guessing is unacceptable when a single wrong medical code can lead to insurance fraud accusations or denial of care.

  • You need to prove in court why a certain extraction happened. "The neural network thought it looked right" doesn't cut it.


Short version:
👉 In high-stakes, legally sensitive domains, the unpredictability of LLMs is a dealbreaker.
👉 Rules still rule there.


Would you also want me to show a more fun example? (There's one from the airline industry that's hilarious.) ✈️😄

You said:
sure!
ChatGPT said:

Okay, here's a fun (and very real) one that shows how LLMs can be too clever for their own good, and how rules save the day:


Example: Airline Chatbot Gone Rogue (Air Canada)

Context:
In 2022–2023, Air Canada launched a chatbot to help customers ask questions and get flight-related info, including on things like bereavement fares (discounted tickets when traveling for funerals or family emergencies).

The chatbot was partially driven by neural network NLP, trained on their website's info and some broader data. It was not fully rule-based — it was meant to understand questions in natural language and generate helpful answers.


What went wrong? 😬
A user asked about bereavement fare rules after already buying a ticket, and the chatbot incorrectly said the discount could still be applied retroactively (which is not true under Air Canada's policy).

The customer followed the advice, flew, then requested a refund — and Air Canada refused.

💥 But here’s the kicker:
The user took screenshots and sued.

The judge ruled in favor of the customer — saying Air Canada was liable for what its chatbot said, because it represented the airline and the customer had no reason to assume it was wrong.


Why a Rule-Based System Would Have Prevented This

  • A rule-based chatbot would have only said exactly what was allowed, from explicitly coded policies.

  • It would never "guess" or try to be helpful in creative ways.

  • Rule-based responses might be dry, but they're reliable — no lawsuits from hallucinated refund promises.


Moral of the Story:

🤖 LLMs are fantastic at mimicking friendly, human-sounding support — but in customer service for legal/financial matters, being charming but wrong is way worse than being boring but right.


So yeah:
✈️ Air Canada’s LLM tried to help a grieving customer...
💸 ...and ended up costing them real money in court.


Want one from finance or law next? Those are also goldmines for LLM limitations.

You said:
yup
ChatGPT said:

Alright, here’s another juicy one — this time from finance — where rules beat LLMs hard:


Example: Automated Loan Approval Systems

Context:
Big banks and fintech companies often use automated systems to pre-approve loans (like mortgages, business loans, or personal credit lines).

Several startups and even large banks tried using neural networks and early language models to help "pre-screen" loan applicants based on:

  • Their online presence (e.g., LinkedIn profiles, company websites)

  • Their financial documents

  • Customer chat conversations

The goal?
👉 To "intelligently" guess if a person or business was trustworthy and likely to repay.


What went wrong? 🧨

  • The ML/LLM-based systems started making weird, biased decisions.

  • For instance:

    • Penalizing applicants who used words like "struggle" or "challenge" in motivational business stories — even when they were success stories!

    • Giving higher scores to applicants who had polished, "corporate-sounding" language, regardless of actual financial numbers.

    • Sometimes hallucinating positive indicators because the language was "optimistic," even if the applicant was broke.

⚖️ Regulators stepped in — particularly in Europe (GDPR, AI Act) and parts of the U.S. — and said:

"You cannot deny loans based on opaque AI models. Applicants must be able to demand a full explanation of why they were approved or denied."

And because LLM decisions are inherently unexplainable, these AI systems were outlawed or severely restricted in financial decision making.


Why Rule-Based Systems Won

  • Traditional loan approval relies on hard-coded rules:

    • Credit score > 680 ✅

    • Income-to-debt ratio < 35% ✅

    • No bankruptcies in past 5 years ✅

  • Every rejection or approval can be broken down into exact numbers and criteria.

  • It’s auditable, transparent, and defensible in court.


Short version:

🏦 In finance, you can't just vibe-check someone's ability to repay a loan based on pretty words.
📜 You need cold, hard, rule-based logic that can be printed on a piece of paper and handed to a lawyer.


Pattern so far:

  • When creativity and nuance are OK → LLMs win.

  • When someone's life, money, or rights are on the line → RULES. EVERY. TIME. 🏛️


Would you also want a quick peek into law next?
(There's an absolutely wild story about a lawyer who used ChatGPT for a court filing... and got caught because it hallucinated fake cases!) 👨‍⚖️👩‍⚖️
Let me know!

You said:
yes do
ChatGPT said:

Alright, this one is legendary — a real cautionary tale straight from the legal world:


Example: The Lawyer Who Used ChatGPT in Court — and It Hallucinated Fake Cases

Context:
In 2023, a lawyer from New York — Steven Schwartz from the firm Levidow, Levidow & Oberman — was representing a client in a personal injury case against an airline.

Needing to file a legal brief (a document where you cite laws and past court cases to support your argument), he used ChatGPT to "help" find relevant case law.


What went wrong? 😵

  • ChatGPT confidently gave him several legal case citations.

  • The problem: The cases were completely made up.

    • The cases didn’t exist in any court database.

    • The decisions, quotes, even the judge names were fabricated.

  • He trusted it.

  • He included them in the official court filing.

The opposing lawyers caught it immediately.
They tried to look up the cases — and nothing matched.
The judge got involved, ordered a hearing, and absolutely roasted the lawyer and his firm in public.

💥 The lawyer and his firm were fined.
💥 Their reputation took a huge hit.
💥 The court issued warnings about using AI tools irresponsibly in legal settings.


Why a Rule-Based System Would Have Prevented This

  • Traditional legal research tools (like Westlaw, LexisNexis) are rule-based at their core:

    • They only return real, verified cases.

    • If a case isn't in the database, it simply doesn’t show up.

  • They don't try to "guess" or "be helpful."

  • They fail safely — i.e., no result is better than a hallucinated result.


Short version:

👨‍⚖️ In law, making up stuff with confidence is worse than saying nothing.
📜 Rule-based research may be slower and less "chatty," but it ensures real-world grounding — critical when your license and client’s fate are on the line.


Moral:

  • LLMs are master improvisers. Great for brainstorming, bad for binding legal realities.

  • Rule systems are boring but trustworthy. Exactly what you want when your job is on the line.


Would you like me to summarize across all these fields — like a simple cheat sheet for "When Rules Beat LLMs"? 📜⚡
(Might be super handy if you're thinking about building something in this space!)
Want it? 🚀

You said:
hit me
ChatGPT said:

Alright, here’s your cheat sheet — super tight, super useful:


🧠 When Rules Beat LLMs: Cheat Sheet

Domain | Why Rules Win | Key Failure of LLMs
Medical | Hard safety constraints, explainable extraction, no hallucinations | LLMs can invent data, can't guarantee field-by-field accuracy
Airlines (Customer Service) | Legal accountability for promises made to customers | LLMs can "improvise" wrong policies
Finance (Loans, Credit Decisions) | Transparent, auditable criteria; legally required explanations | LLMs introduce bias, make opaque decisions
Law (Legal Filings, Research) | Verifiable sources only, strict factual accuracy | LLMs hallucinate cases and citations
Regulated Industries (Banking, Insurance, Healthcare) | Predictable outputs, audit trails, compliance proof | LLMs are probabilistic, unpredictable under pressure

📜 Golden Rules

  • If a mistake can cost money, legal penalties, or human lives → Rules.

  • If you need full explainability (audit or court) → Rules.

  • If the domain evolves very slowly (like law or finance rules) → Rules scale well.

  • If being wrong is worse than being silent → Always prefer Rules.


🚀 When You Might Still Layer LLMs In

  • Use LLMs for idea generation, drafting, summarization — but gate their outputs through strict rules before anything "goes live."

  • Example:

    • LLM drafts an answer →

    • Rule system checks "is this policy-compliant?" →

    • Only then deliver to user.

(LLM + Rules is actually a huge architecture pattern right now in serious AI engineering.)


One-Liner to Remember

"Rules guard the walls. LLMs dance inside them."


Would you like me to also sketch a quick "LLM + Rule hybrid system architecture" diagram if you're thinking about designing something like this?
(It's super common now in AI product design — called a Guardrail architecture.) 🎯
Want a visual?

You said:
yes
ChatGPT said:

Awesome, let’s do it! Here's a simple but powerful visual layout of a Guardrail System that combines an LLM with hard Rules:


🛡️ LLM + Rules Hybrid (Guardrail Architecture)

User Input
  ↓
[ Pre-Validation Rules Layer ]
  → (Reject obviously bad inputs: e.g., profanity, illegal requests)
  ↓
[ LLM Engine (e.g., GPT, Claude) ]
  → (Generates draft output: text, answers, extracted data)
  ↓
[ Post-Validation Rules Layer ("Guardrails") ]
  → (Check: Did output violate policies? Missing required fields? Hallucinate?)
  → (If FAIL → return safe fallback, error message, or escalate to human)
  ↓
[ Final Output to User ]

🔥 Key Components Explained

  • Pre-Validation Rules Layer

    • Stop bad requests before reaching the LLM.

    • Example: "Reject any request that mentions violence, hacking, illegal activity."

  • LLM Engine

    • Generate text, answers, ideas freely.

    • But assume it’s "guilty until proven innocent."

  • Post-Validation Rules Layer

    • Scrutinize the LLM’s output.

    • Examples:

      • Does this reference an allowed source only?

      • Does it match policy templates?

      • Is all extracted data verifiable?

  • Fallback Mechanisms

    • If output fails validation → Either:

      • Send a generic safe message ("We’re sorry, we cannot process this request.")

      • Escalate to human review if necessary.


🎯 Bonus Tip: Real-World Techniques

  • Regex / Structured Checkers for specific outputs (e.g., did the LLM generate a real medical code?)

  • Knowledge Base Verification (e.g., re-check any facts against a database before trusting)

  • Threshold Tuning (e.g., only allow answers above a certain confidence score)
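
To make the flow above concrete, here is a minimal Python sketch of such a guardrail pipeline. It is only an illustration: call_llm stands in for whatever LLM API is used, and the patterns and fallback text are hypothetical examples, not a real policy.

    import re

    BANNED_INPUT = re.compile(r"\b(hack|launder|forge)\b", re.IGNORECASE)            # hypothetical pre-validation rule
    BANNED_OUTPUT = re.compile(r"\bguaranteed\b|\byou should take\b", re.IGNORECASE)  # hypothetical post-validation rule
    FALLBACK = "We're sorry, we cannot process this request."

    def call_llm(prompt: str) -> str:
        # Placeholder for a real LLM call (OpenAI, Anthropic, a local model, ...)
        raise NotImplementedError

    def answer(user_input: str) -> str:
        # 1. Pre-validation: reject obviously bad inputs before they reach the LLM
        if BANNED_INPUT.search(user_input):
            return FALLBACK
        # 2. LLM engine: generate a draft answer
        draft = call_llm(user_input)
        # 3. Post-validation ("guardrails"): check the draft against hard rules
        if BANNED_OUTPUT.search(draft):
            return FALLBACK  # or escalate to a human instead
        # 4. Only a draft that passes every rule reaches the user
        return draft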


🌟 Why This Matters

  • You get the creativity and power of LLMs

  • Without giving up control, compliance, or user trust.


Would you like me to quickly show you a real-world example architecture too — like how, say, a medical chatbot or a finance assistant would be built using this model? 🚑💵 (could diagram one fast for you if you want!)

You said:
yes please
ChatGPT said:

Perfect — let’s do a real-world one!


🏥 Real-World Example: Medical AI Chatbot Using LLM + Rules

(Imagine a system helping patients ask about medications, symptoms, appointment scheduling.)


🔧 Architecture Diagram

User Question
  ↓
[ Pre-Validation Layer ]
  → Block prohibited topics (e.g., "how to harm myself")
  → Language filtering / intent detection
  ↓
[ Intent Classifier ]
  → Is this about symptoms, medications, appointments, billing, etc.?
  ↓
(If "Medical Advice" detected → stricter controls)
  ↓
[ LLM Engine (e.g., fine-tuned GPT) ]
  → Generate a draft response
  ↓
[ Post-Validation Guardrails ]
  → Check:
    - Is the medication name valid? (check against medical database)
    - Are only approved "safe phrases" used? (no direct diagnosis, no treatment advice)
    - Is there proper disclaimer language? ("This is not medical advice...")
  ↓
(If output FAILS validation → fallback message: "Please contact your healthcare provider.")
  ↓
[ Deliver Final Response to User ]

🧠 Key Special Features in Medical Systems

  • Medication database lookup:

    • If LLM mentions a drug → Verify spelling, existence, approved use.

  • Symptom-to-Diagnosis rules:

    • NEVER allow the LLM to "diagnose" — only suggest that user seek medical attention.

  • Mandatory Disclaimers:

    • Even a perfectly good answer MUST attach standard legal disclaimers.


🚨 Example Guardrail Rule

  • If the LLM uses phrases like:

    • "You should take..." ❌ → BLOCK

    • "It is recommended you..." ❌ → BLOCK

  • Must instead phrase as:

    • "You may wish to consult your doctor regarding..." ✅


🏆 Result:

  • You get a smart, conversational experience 🎯

  • BUT it is legally and medically safe


This exact model is why companies like Mayo Clinic, Babylon Health, and other major telemedicine providers either:

  • Build tight wrappers around LLMs

  • Or only use LLMs internally, and expose users only to guardrailed outputs.



Would you also want me to quickly sketch a similar one for finance? 💰 (like a personal banking assistant example?)
Finance guardrails are even crazier sometimes! 😄

You said:
i do
ChatGPT said:

Awesome, let's dive into a finance one — it’s super juicy because finance demands brutal precision and compliance:


💵 Real-World Example: Personal Banking Assistant Using LLM + Rules

(Imagine a chatbot helping users check balances, explain fees, suggest savings tips, etc.)


🔧 Architecture Diagram

User Input
  ↓
[ Pre-Validation Layer ]
  → Filter for sensitive topics (e.g., crypto advice, investments without disclaimers)
  → Detect user intent (Balance inquiry? Transaction help? Credit score?)
  ↓
[ Intent Routing ]
  → Simple info (balances, transactions) → rules-only API lookup
  → General finance advice → LLM draft generation
  ↓
[ LLM Engine (fine-tuned GPT/Claude/etc.) ]
  → Generate helpful language for non-critical topics (e.g., budgeting tips)
  ↓
[ Post-Validation Guardrails ]
  → Check:
    - NO investment advice unless from a pre-approved knowledge base
    - No predictions ("Bitcoin will rise to $100K!" ❌)
    - Regulatory language inserted (e.g., "Not financial advice" required)
    - All numbers cited must match actual account data (API cross-check)
  ↓
(If output FAILS → fallback: "Please contact a financial advisor for personalized assistance.")
  ↓
[ Final Output to User ]

🧠 Key Guardrails Specific to Finance

  • Real-time Data Verification:

    • If LLM says "your account balance is $2,540" → must double-check live with banking backend.

  • No Predictive Language Allowed:

    • Phrases like "you will earn," "you are guaranteed" must be blocked.

  • Regulatory Compliance:

    • Always show clear disclosures like:

      "This is general informational content. For financial advice tailored to your situation, please consult a licensed financial professional."

  • Fallback Escalation:

    • If a user asks high-risk questions ("Should I buy stocks?") → Auto-escalate to human agents.


🚨 Example Guardrail Rule

  • If LLM-generated text contains:

    • "You should invest in..." ❌ → BLOCK

    • "Guaranteed return of..." ❌ → BLOCK

  • Allow only:

    • "Here are some budgeting tips based on general financial principles..." ✅


🏆 Result:

  • The chatbot feels intelligent and conversational 🧠

  • But cannot expose the bank to legal action, fraud, or false promises 🏛️


🌟 Pro Tip:

Big banks today (think JPMorgan, Bank of America) are either:

  • Building "narrow" LLMs trained only on verified data

  • Or combining LLMs with RULES+APIs for any sensitive interaction

This hybrid model is now industry standard in any regulated field.


Would you also want a crazy short "design checklist" you could use anytime you're designing an LLM+Rules system yourself? 🛠️ (Something like a 10-point designer’s guide?)
I can throw that in if you want! 🚀

You said:
okay, yes that might be interesting
ChatGPT said:

Awesome — here’s a very practical, field-tested 🛠️ LLM + Rules System Designer Checklist:


🧩 10-Step Checklist for Designing Safe LLM + Rules Systems


✅ 1. Define High-Risk vs Low-Risk Outputs

  • What outputs absolutely cannot be wrong?
    (e.g., money transfers, medical advice, legal citations)

  • What outputs can tolerate some looseness?
    (e.g., jokes, greetings, motivational tips)


✅ 2. Use Pre-Validation to Filter Inputs

  • Block illegal, unsafe, out-of-scope user questions before they hit the LLM.

  • Example: Ban investment advice questions unless authenticated professional users.


✅ 3. Classify Intent Early

  • Route inputs differently based on user intent:

    • API lookups for hard data (balances, schedules)

    • LLM generation for "soft" content (explanations, friendly phrasing)


✅ 4. Constrain the LLM’s Role

  • Make it a "language generator" — not a "knowledge source."

  • Feed it trusted content only if possible.


✅ 5. Hard Validate All Critical Outputs

  • Post-process LLM outputs:

    • Validate numbers against databases

    • Validate dates, names, medication lists, etc.

    • Flag anything unverifiable for manual review or fallback messaging.


✅ 6. Insert Mandatory Legal Language

  • If required (finance, healthcare, law), automatically attach disclaimers no matter what the LLM says.

  • Example: "This is not legal advice."


✅ 7. Never Let the LLM Invent Facts

  • Use "retrieval augmented generation" (RAG) if necessary:
    → LLM retrieves real documents, then summarizes.


✅ 8. Fallback Gracefully

  • If output validation fails → deliver a safe, polite fallback.
    (e.g., "We’re unable to process your request right now. Please contact support.")


✅ 9. Log Everything

  • Save full interaction logs for:

    • Debugging

    • Legal audits

    • Model improvement

  • (In critical systems, no answer should be untraceable.)


✅ 10. Plan for Human Escalation

  • Always have a way to escalate complex or dangerous requests to a human agent.

  • Think of the LLM as first triage, not final authority.


Super-Short Summary

"Treat the LLM like an enthusiastic intern. It can draft, suggest, and help — but you must double-check its work before shipping it." ✍️๐Ÿ›ก️


Would you also like a real quick example prompt showing how you'd instruct an LLM properly inside this system? (Example of how to “lock” its behavior a bit?) 🔒
Could be useful if you're actually writing prompts for one! Want it? 🚀

You said:
No, thank you. You've been very helpful
ChatGPT said:

You're very welcome! 😊 I'm glad I could help. If you ever need more info or have new ideas to explore, don't hesitate to reach out. Have fun with your projects, and good luck with everything! 🚀👨‍💻👩‍💻

Sunday, November 12, 2023

On SQLAlchemy

I've been using SQLAlchemy and reading about it for a few months now, and I don't get it. I don't mean I don't understand SQLAlchemy; I've learned how to use it. But I don't get why people use it, and are actually enthusiastic about it.

I'm writing this article because I can't seem to find any articles on the internet that are critical of it, and I feel there should be one.

To explain why I don't like SQLAlchemy, I will go through the advantages and disadvantages that are commonly mentioned (see the references below) and respond to them.

Let's abbreviate SQLAlchemy to Alchemy and explain what Alchemy does. It is an ORM (Object-Relational Mapper: it converts database records to Python objects and the other way around), and it has a query-building language that lets you compose queries using method chaining. Alchemy changed its syntax in version 2.
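
For readers who haven't seen it, here is a minimal sketch of what that looks like in the version 2 syntax. It is only an illustration; the users table and its columns are made up for the example.

    from sqlalchemy import String, create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(50))

    engine = create_engine("sqlite:///example.db")
    Base.metadata.create_all(engine)              # generate the table from the model

    with Session(engine) as session:
        session.add(User(name="alice"))           # ORM: object -> record
        session.commit()
        stmt = select(User).where(User.name == "alice")   # query built by method chaining
        users = session.scalars(stmt).all()       # ORM: records -> objects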

Is there an alternative to Alchemy? Yes: you can use psycopg to run plain SQL queries and write simple functions yourself that create objects from database records.
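
The plain-SQL alternative looks roughly like this. The sketch assumes psycopg 3 and the same hypothetical users table; the mapping function is something you write yourself.

    from dataclasses import dataclass
    import psycopg

    @dataclass
    class User:
        id: int
        name: str

    def get_users_by_name(conn, name: str) -> list[User]:
        # Plain SQL with a parameterized query; build the objects yourself
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users WHERE name = %s", (name,))
            return [User(id=row[0], name=row[1]) for row in cur.fetchall()]

    with psycopg.connect("dbname=example") as conn:
        users = get_users_by_name(conn, "alice")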

I read SQLAlchemy 2 In Practice by Miguel Grinberg, which is a good book, but the author doesn't even ask why we should use Alchemy, and implies that it is the only way to work with a database. That got me thinking.

Grinberg actually wrote the book because he had created a tutorial on Flask and found that most of the questions he received were about Alchemy:

Many of the questions are from developers that need to know how to do something a little more complex than what I present in the article, but are unable to figure out how to do it on their own.

Advantages

Easy to set up

If you know what to do, creating a few simple models to set up the ORM is indeed easy, and writing ORM functionality manually also takes some time. So setting up the models and generating the database tables is easy enough. It gets trickier when you need to set up a many-to-many table, especially one with extra columns.
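
For instance, the usual way to model a many-to-many relation with an extra column is an association object, which takes some figuring out. A sketch, with hypothetical Order, Product and quantity names:

    from sqlalchemy import ForeignKey
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

    class Base(DeclarativeBase):
        pass

    class OrderItem(Base):                        # the many-to-many table with an extra column
        __tablename__ = "order_items"
        order_id: Mapped[int] = mapped_column(ForeignKey("orders.id"), primary_key=True)
        product_id: Mapped[int] = mapped_column(ForeignKey("products.id"), primary_key=True)
        quantity: Mapped[int]                     # the extra column
        product: Mapped["Product"] = relationship()

    class Order(Base):
        __tablename__ = "orders"
        id: Mapped[int] = mapped_column(primary_key=True)
        items: Mapped[list["OrderItem"]] = relationship()

    class Product(Base):
        __tablename__ = "products"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]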

Alchemy gives abstraction to the backend database. So, an average developer does not have to worry about SQL statements.

This is not only untrue, it's deceptive. You need to know SQL, and you need to know the syntax of Alchemy as well. If you know how to do something in SQL and want to do it in Alchemy, you have to figure out how to express it, and Alchemy's syntax has many quirks and idiosyncrasies.

The transition to other databases becomes easier.

This is true: if you have used SQLite and want to switch to Postgres, and you haven't used any SQLite-specific constructs, you can switch without changing any code. But this is not a big advantage. Most of the syntax is standard SQL anyway. And how often do you actually swap one database for another? Finally, changing databases and expecting it to have no side effects is naive; in practice it can bring quite substantial problems with it.
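
In practice the switch mostly amounts to changing the engine URL, for example (placeholder credentials):

    from sqlalchemy import create_engine

    # engine = create_engine("sqlite:///app.db")
    engine = create_engine("postgresql+psycopg://user:password@localhost/app")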

Readability and Maintainability: SQLAlchemy code tends to be more readable and easier to maintain than raw SQL queries. It leverages Python classes and objects to represent database tables and rows, making your code intuitive and self-explanatory.

It is true that adding a join to a query requires less code. Apart from that, plain SQL is just as readable and just as easy to maintain.

Security: SQLAlchemy includes built-in protection against SQL injection. It automatically escapes user input, effectively thwarting malicious SQL code execution.

Every modern database library includes protection against SQL injection through parameterized queries.

Queries are optimized and may work better than SQL if you wrote it yourself unless you are an SQL veteran.

This is also deceptive. Alchemy is slower by default, simply because it is a wrapper around SQL. Furthermore, since Alchemy has a different conceptual model than SQL, it is not obvious which SQL queries are actually executed. Only by learning the ins and outs of Alchemy, and tweaking many of the model declarations, can you ensure that Alchemy performs efficient queries. Of course there will be edge cases in which an Alchemy query is more efficient than a hand-written one, but only when the hand-written SQL comes from someone with little experience in SQL.
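
You can at least make Alchemy show what it sends to the database, either by turning on statement logging or by rendering a statement to SQL. A small sketch, reusing the hypothetical User model from earlier:

    from sqlalchemy import create_engine, select

    # echo=True logs every SQL statement the engine emits
    engine = create_engine("sqlite:///example.db", echo=True)

    # A statement can also be rendered without executing it
    stmt = select(User).where(User.name == "alice")
    print(stmt)  # SELECT users.id, users.name FROM users WHERE users.name = :name_1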

On StackOverflow, there are, at the time of writing, 16,000 questions with SQLAlchemy tag. Therefore, you should not be concerned about unsolvable bugs when using this tool.

Quite a relief!

Disadvantages

There could be instances where Alchemy might become inefficient. Therefore, knowing SQL is always desired.

When performance matters you will need to learn Alchemy from A to Z. Alchemy does still allow you to run plain SQL queries, so that can serve as an escape hatch.

Performance overhead: SQLAlchemy introduces a slight performance overhead compared to raw SQL queries. While this overhead is generally inconsequential for most applications, it may become a concern for high-performance scenarios.

True

Learning Curve: SQLAlchemy has a learning curve, particularly for developers new to ORMs. It takes some time to become proficient in SQLAlchemy’s API and concepts.

Believe me, it takes a lot of time. Alchemy has many fixes for problems it created itself. It is a completely different mental model that you need to acquire, and you constantly have to switch between this model and the plain SQL model.

Additional disadvantages

These disadvantages come from me.

Alchemy mixes domain model code with storage code.

It's good practice to separate domain model code from database storage code. It makes you aware of database queries and allows you to focus on them. Alchemy combines the two: it may issue a database query simply because you access an attribute (lazy loading). This is not a problem for a simple operation, but when you operate on thousands of objects you need to know exactly what Alchemy does internally, and tweak the model or the queries to make sure it performs the right kind of queries.
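
The classic case is lazy loading. Using the hypothetical Order/OrderItem models from the earlier sketch and a Session as before, each attribute access in the loop below may fire its own query (the N+1 problem), and you have to opt in to eager loading to avoid it:

    from sqlalchemy import select
    from sqlalchemy.orm import selectinload

    # Lazy loading: one query for the orders, plus one query per order for its items
    orders = session.scalars(select(Order)).all()
    for order in orders:
        print(len(order.items))

    # The tweak: eager-load the items up front in a single extra query
    orders = session.scalars(select(Order).options(selectinload(Order.items))).all()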

Asynchronous access

When developing asynchronous code with Alchemy, you need to make changes in a number of places to prevent Alchemy from making implicit database calls. Reading about that in Grinberg's book positively scared me.

Additional advantages

Alembic

To be fair, Alchemy comes with a tool that is actually quite nice, and which is not often mentioned in the advantages I read: Alembic, the database migration tool that generates migration scripts based on the changes you make to your models. This allows you to evolve your models and have Alembic take care of the migrations.
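
The workflow is roughly: change a model, run alembic revision --autogenerate -m "...", review the generated script, and apply it with alembic upgrade head. An autogenerated migration has this shape (the column and revision id here are hypothetical):

    from alembic import op
    import sqlalchemy as sa

    revision = "a1b2c3d4e5f6"   # hypothetical revision id
    down_revision = None

    def upgrade():
        op.add_column("users", sa.Column("email", sa.String(120), nullable=True))

    def downgrade():
        op.drop_column("users", "email")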

Conclusion

As this is a critical article, I recommend against using SQLAlchemy. Writing plain SQL queries is just easier, and gives you more control. Creating simple ORM functionality and writing migration scripts doesn't need to be complicated either.

References

https://www.analyticsvidhya.com/blog/2022/07/a-brief-introduction-to-sql-alchemy/

https://python.plainenglish.io/sqlalchemy-vs-raw-sql-queries-in-python-a-comparative-example-b7a838ebef82

https://trantriducs.medium.com/orm-and-sqlalchemy-the-magic-wand-in-database-management-a2d02877a57a

