I recently read “Why Scientists are Turning to Rust” and found myself disagreeing with its core premise. At the same time, it fits a broader pattern in the “Python vs. Rust” discourse: the claim that Rust is the future of scientific computing, when Python + C/Fortran remains the practical optimum, fundamentally misunderstands how scientific computing actually works.
The Article’s Main Arguments
The article presents four key points:
- Performance Benefits: Examples include processing genomic data with claimed speedups (e.g., 4x improvement for geospatial coordinate conversion after 2-3 months of porting)
- Memory Safety: Citing Microsoft’s data that 70% of security bugs relate to memory safety, with compile-time error catching as a major advantage
- Superior Toolchain: Unified tooling with Cargo, excellent error messages, and an active ecosystem (50,000+ crates)
- Real-World Adoption: Projects like Varlociraptor, Sourmash, Terminus, and production use at 10x Genomics
Why These Arguments Miss the Mark
1. The Performance Argument is Misleading
The article’s performance examples raise immediate questions:
- Why not simply write performance-critical components in C/C++ with Python bindings, following the established pattern of tools like BWA and Minimap2?
- Were the Python implementations properly optimized using NumPy vectorization, Numba JIT, or Cython before comparison? (See the sketch after this list.)
- Are we comparing Rust against naive Python implementations or against the optimized numerical libraries that scientists actually use?
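As a hedged sketch of what that baseline check looks like (assuming NumPy is installed; the degree-to-radian conversion below is a toy stand-in for the article’s geospatial example, not its actual workload), the vectorized form replaces a million interpreter-level iterations with a single call into NumPy’s compiled loop:

import math
import numpy as np

def to_radians_naive(degrees):
    # Pure Python: one interpreter iteration per element
    return [math.radians(d) for d in degrees]

def to_radians_vectorized(degrees):
    # NumPy: a single call that loops in compiled C
    return np.radians(degrees)

coords = np.linspace(-180.0, 180.0, 1_000_000)
assert np.allclose(to_radians_naive(coords), to_radians_vectorized(coords))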
The reality is that scientific Python already leverages highly optimized C/C++ and Fortran libraries (MKL, BLAS, LAPACK) through NumPy and SciPy. For specialized needs outside NumPy’s domain, such as string-based sequence analysis in bioinformatics or radial distribution functions $g(r)$ with complex exclusion rules, we have mature solutions: Numba (including CUDA support), Cython, or traditional C/Fortran extensions.
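For needs outside NumPy’s array model, such as character-level sequence scans, a minimal Numba sketch (assuming numba and numpy are installed; gc_content is an illustrative function of my own, not a tool from the article) shows how an ordinary Python loop can be JIT-compiled to machine code:

import numpy as np
from numba import njit

@njit(cache=True)
def gc_content(seq):
    # seq: uint8 array of ASCII codes; count G (71) and C (67)
    gc = 0
    for base in seq:
        if base == 71 or base == 67:
            gc += 1
    return gc / len(seq)

seq = np.frombuffer(b"ACGT" * 250_000, dtype=np.uint8)
print(gc_content(seq))  # 0.5, computed by a compiled loop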
2. Memory Safety is a Solution Looking for a Problem
The article’s emphasis on memory safety reveals a fundamental misunderstanding of scientific computing:
Consider the reality of scientific code:
- Run once, get results, move on
- Crashes mean “rerun the experiment,” not “system failure”
- Rapid iteration matters more than bulletproof code
- We’re not building 24/7 services
Memory safety is crucial for operating systems and web servers. For research scripts that process data and produce plots? It’s over-engineering.
3. The Ecosystem Question Goes Unanswered
The article never addresses the elephant in the room: Why abandon decades of optimized numerical libraries?
Python’s scientific stack represents thousands of person-years of optimization work. There’s no compelling reason to reimplement NumPy, BLAS, or other battle-tested tools in Rust solely for theoretical safety benefits.
4. Selection Bias
The article only interviews Rust converts. It doesn’t ask:
- How many scientists tried Rust and returned to Python?
- Why aren’t mainstream tools (BLAST, GATK, SAMtools) being rewritten in Rust?
- Why isn’t the NumPy/SciPy community moving to Rust?
What the Article Gets Right
To be fair, Rust does have legitimate use cases in scientific computing:
- Long-term production tools: Companies like 10x Genomics need maintainable, performant code that teams can collaborate on. Here, Rust’s advantages over C++ are real.
- Excellent tooling: Cargo is undeniably superior to CMake and the C++ build ecosystem.
- Welcoming community: An inclusive community does matter for adoption.
The Real Story: Python + C/Fortran Remains Optimal
For most scientists, the established pattern works perfectly:
def scientific_analysis():
    # Python: High-level orchestration and prototyping
    data = load_and_preprocess()
    # C/Fortran: Performance-critical computation
    results = optimized_library.compute(data)
    # Python: Analysis and visualization
    return analyze_and_plot(results)
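For concreteness, here is a minimal runnable instance of the same pattern (assuming NumPy and SciPy are installed; the names above are placeholders, and the FFT below stands in for whatever compiled kernel a given analysis needs):

import numpy as np
from scipy.fft import rfft

def scientific_analysis():
    # Python: orchestration (synthetic data stands in for real I/O)
    rng = np.random.default_rng(0)
    data = rng.standard_normal(1_000_000)
    # Compiled code: the FFT executes in optimized C, not in Python
    spectrum = np.abs(rfft(data))
    # Python: analysis and visualization (plotting omitted here)
    return spectrum.argmax()

print(scientific_analysis())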
When specialized needs go beyond libraries like NumPy, the solution hierarchy is:
- NumPy vectorization
- Numba JIT compilation
- Cython
- C/Fortran extensions
- Existing optimized libraries
Rust might fit somewhere around #6.
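As a rough illustration of step 4, even the standard library’s ctypes can bind an existing compiled routine in a few lines. A minimal sketch, assuming a Unix-like system where the C math library can be located:

import ctypes
import ctypes.util

# Load the C math library (assumption: a Unix-like platform)
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0, evaluated in compiled C code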
The Fundamental Disconnect
Rust solves problems scientists rarely face:
- Preventing data races in concurrent systems
- Ensuring memory safety in long-running services
- Compile-time guarantees for mission-critical software
Meanwhile, it introduces friction scientists can’t afford:
- Slow compilation times
- Steep learning curve
- Complex type system
- Fighting the borrow checker instead of exploring data
Conclusion
The original article reads more like Rust evangelism than reasoned analysis. The “Python is too slow” narrative misunderstands how scientific computing works: Python serves as an orchestration layer while optimized compiled code does the heavy lifting.
Most importantly, the article ignores that bottlenecks in scientific computing are usually algorithmic, not language-related. For sufficiently large inputs, an $O(n^2)$ algorithm in Rust is still slower than an $O(n \log n)$ algorithm in Python.
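A toy illustration of that point in pure Python (no Rust involved; both functions are my own examples): two ways to ask whether a list contains a duplicate, where the asymptotically better version wins at scale regardless of implementation language.

def has_duplicate_quadratic(xs):
    # O(n^2): compare every pair of elements
    n = len(xs)
    return any(xs[i] == xs[j] for i in range(n) for j in range(i + 1, n))

def has_duplicate_sorted(xs):
    # O(n log n): sort once, then compare adjacent elements
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

xs = list(range(2_000)) + [0]  # one duplicate hidden at the end
assert has_duplicate_quadratic(xs) and has_duplicate_sorted(xs)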
For individual developers who enjoy Rust? Go for it. But claiming it’s the future of scientific computing ignores the reality of how science is actually done. The Python + C/Fortran combination remains the sweet spot for good reasons: it balances rapid prototyping, excellent performance through mature libraries, and “good enough” safety for research contexts.
In scientific computing, Rust is largely a solution in search of a problem.
