
Dogukan Uraz T.
RE/RS, Self-Improvement, Program Synthesis and Scaling Laws

I work at the intersection of self-improving AI, program synthesis, neural scaling laws, and agent infrastructure for science. I see finding new ways to scale reasoning and improvement as a stepping stone to super-reasoning. Among the many technical topics that interest me, the ones I focus on most are meta-learning, meta-optimization, multi-step reasoning, program synthesis, interpretability, and compute-optimal scaling.
Recent Posts & Works
April 2, 2025
Building a Scalable Document Processing System with Gemini 2.0 and Elasticsearch
January 22, 2025
Revisiting Superintelligence Control Problem with Newer Extensions
January 2, 2025
New Year, New Blog
Archive, 2024
Tensor Decompositions
Published on December 31, 2024
Mixture-of-Experts Process Groups Initialization
Published on December 31, 2024
Real-Time Text and Voice-Based Retrieval Augmented Generation Pipeline with Gemini 2.0 for Long Documents
Published on December 25, 2024
Enhancing Performance with C/C++ Code Execution for Langchain Agents
Published on July 18, 2024
OpenAI's Code Execution Runtime & Replicating Sandboxing Infrastructure
Published on July 2, 2024
Utilizing Airflow for Planning, Scheduling, Executing and Scaling AI Agentic Workloads
Published on May 6, 2024
Linen Modules for Simple Long Convolutions Sequence Modeling with Log-Linear Time Complexity
Published on March 20, 2024
Tri-RMSNorm: Yielding Speed-up for Training Stabilization
Published on March 14, 2024
[Let's Review] Gradient Ascent + Jax Implementation
Published on March 11, 2024
[Let's Review] Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
Published on March 7, 2024