Low-latency language tooling: Rethinking the Kotlin LSP for mobile environments


Until recently, writing Kotlin on your phone meant that you couldn’t access the code intelligence that Android developers rely on in traditional desktop IDEs. Code on the Go’s experimental features now include a full Kotlin Language Server Protocol (LSP) that offers real-time completions, inline error diagnostics, go-to-definition, and more, all running locally on your Android device.

Running a language server on a phone isn’t as simple as porting a desktop tool. Desktop language servers are built around heavy infrastructure: the Kotlin compiler, large SDK tooling, multiple processes, generous memory budgets, and the assumption of mouse-based cursor control. None of that translates easily to a phone. But given Kotlin’s prominence in Android app development, it was important for us to figure out how to offer the same kind of language support for Kotlin as we do for Java and XML.

Here’s how we approached the key engineering challenges.

Parsing without the Kotlin compiler

The most direct route to parsing Kotlin would be using the Kotlin compiler’s own parser. It’s authoritative and handles every edge case, but it’s far too heavy for mobile. Startup time alone would make it unusable, and memory usage would be a problem on most devices.

Instead, we use Tree-sitter, a parsing library written in C that we call through the Java Native Interface (JNI). Tree-sitter has three properties that make it practical on mobile:
  • Speed. Typical parse times are under 10ms for a 1,000-line file, which is fast enough to re-parse on every keystroke without a noticeable delay.
  • Incremental parsing. When you change a few characters, Tree-sitter only updates the affected portion of the syntax tree instead of re-parsing the whole file. This keeps analysis responsive during active editing.
  • Error tolerance. Tree-sitter produces a partial syntax tree even when your code has errors. That matters in an editor, where code is frequently incomplete or broken mid-edit.
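To make the incremental flow concrete, here is a rough sketch of what a JNI bridge to Tree-sitter can look like. The class and native method names below are illustrative, not our actual binding, and the snippet isn’t runnable without the native library; it only shows the shape of the edit-then-reparse cycle, where passing the previous tree lets Tree-sitter reuse unchanged subtrees.

```kotlin
// Hypothetical JNI bridge to Tree-sitter (names illustrative).
class TreeSitterParser : AutoCloseable {
    private val parserPtr: Long = nativeNewParser()

    // Parse from scratch, or incrementally when oldTreePtr != 0.
    fun parse(source: String, oldTreePtr: Long = 0L): Long =
        nativeParse(parserPtr, oldTreePtr, source)

    // Record an edit on the old tree (wraps ts_tree_edit in the C API)
    // so the next parse can reuse everything outside the edited range.
    fun edit(treePtr: Long, startByte: Int, oldEndByte: Int, newEndByte: Int) =
        nativeEdit(treePtr, startByte, oldEndByte, newEndByte)

    override fun close() = nativeDeleteParser(parserPtr)

    private external fun nativeNewParser(): Long
    private external fun nativeParse(parser: Long, oldTree: Long, source: String): Long
    private external fun nativeEdit(tree: Long, start: Int, oldEnd: Int, newEnd: Int)
    private external fun nativeDeleteParser(parser: Long)

    companion object {
        init { System.loadLibrary("treesitter_kotlin") } // hypothetical library name
    }
}
```

On each keystroke, the editor calls `edit` with the changed byte range, then `parse` with the old tree pointer; Tree-sitter re-parses only the affected region.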

The tradeoff is that Tree-sitter doesn’t provide the deep semantic information the Kotlin compiler would. We compensate with our own semantic analysis layer built on top of the parse trees, including a symbol resolver, type inferrer, and overload resolver.
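As a toy illustration of the kind of work that semantic layer does (this is not the production resolver, and the types are invented for the example), here is an overload resolver that picks a candidate by arity and then by exact parameter-type match:

```kotlin
// Illustrative only: a minimal overload resolver of the sort the
// semantic layer runs on top of Tree-sitter's syntax tree.
data class FunctionSymbol(
    val name: String,
    val paramTypes: List<String>,
    val returnType: String,
)

fun resolveOverload(
    candidates: List<FunctionSymbol>,
    argTypes: List<String>,
): FunctionSymbol? {
    // Keep only candidates with the right number of parameters.
    val arityMatches = candidates.filter { it.paramTypes.size == argTypes.size }
    // Prefer an exact type match; otherwise fall back to the first arity match.
    return arityMatches.firstOrNull { it.paramTypes == argTypes }
        ?: arityMatches.firstOrNull()
}

val plusOverloads = listOf(
    FunctionSymbol("plus", listOf("Int"), "Int"),
    FunctionSymbol("plus", listOf("String"), "String"),
)
println(resolveOverload(plusOverloads, listOf("String"))?.returnType) // String
```

A real resolver also has to handle subtyping, nullability, default arguments, and extension receivers, which is where most of the complexity lives.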

Three-layer symbol indexing

When you trigger completions, the server needs to quickly find every symbol that could be relevant. We maintain three separate indexes, queried in priority order:

  1. Project index: Symbols from your own source files. These appear first in results and update incrementally as you edit.
  2. Standard library index: The entire Kotlin standard library, pre-indexed at build time (more on this below).
  3. Classpath index: Symbols from your project’s dependencies, pulled from JAR and AAR files in the Gradle build cache.

Results from all three layers are combined, de-duplicated, and ranked before you see them.
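A simplified sketch of that lookup, with invented names and a layer number standing in for the priority order (project = 0, stdlib = 1, classpath = 2):

```kotlin
// Illustrative three-layer completion lookup: filter by prefix,
// rank project symbols first, and de-duplicate by fully qualified name.
data class Symbol(val fqName: String, val layer: Int)

fun complete(prefix: String, vararg layers: List<Symbol>): List<Symbol> =
    layers.flatMap { it }
        .filter { it.fqName.substringAfterLast('.').startsWith(prefix, ignoreCase = true) }
        .sortedBy { it.layer }      // project before stdlib before classpath
        .distinctBy { it.fqName }   // if a name appears twice, the higher-priority copy wins

val project = listOf(Symbol("com.app.Mapper", 0))
val stdlib = listOf(Symbol("kotlin.collections.map", 1))
val classpath = listOf(Symbol("androidx.collection.ArrayMap", 2), Symbol("com.app.Mapper", 2))

println(complete("map", project, stdlib, classpath).map { it.fqName })
// [com.app.Mapper, kotlin.collections.map]
```

The real ranking also weighs factors like import proximity and usage frequency, but the priority-then-dedupe structure is the core idea.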

Pre-computing the standard library

The Kotlin standard library contains over 100,000 symbols, including extensions on String, List, Map, Sequence, and more. Loading and indexing all that at runtime would be too slow on a phone, so we don’t. We built a tool called kotlin-stdlib-generator that runs at build time, not on your device. It uses Kotlin reflection to walk every public class, function, property, and extension in the standard library, then serializes everything into a pre-built index file embedded directly in the app. When the language server starts up, the standard library is simply there: no scanning, no compilation, no delay.
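The build-time/runtime split can be sketched like this. The record shape and binary format below are illustrative, not the actual kotlin-stdlib-generator output; the point is that serialization happens once at build time, and the device only deserializes:

```kotlin
import java.io.*

// Illustrative stdlib index record and a simple length-prefixed format.
data class StdlibSymbol(val fqName: String, val kind: String)

// Build time: the generator walks the stdlib via reflection and writes records.
fun writeIndex(symbols: List<StdlibSymbol>, out: OutputStream) =
    DataOutputStream(out).use { d ->
        d.writeInt(symbols.size)
        symbols.forEach { d.writeUTF(it.fqName); d.writeUTF(it.kind) }
    }

// Device: the language server just deserializes the embedded file.
fun readIndex(input: InputStream): List<StdlibSymbol> =
    DataInputStream(input).use { d ->
        List(d.readInt()) { StdlibSymbol(d.readUTF(), d.readUTF()) }
    }

val buf = ByteArrayOutputStream()
writeIndex(listOf(StdlibSymbol("kotlin.text.trim", "extension")), buf)
println(readIndex(ByteArrayInputStream(buf.toByteArray())).first().fqName)
// kotlin.text.trim
```

Because the index ships inside the app, startup cost is just the deserialization pass rather than reflective scanning of 100,000+ symbols.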

Running in-process, not as a separate service

Traditional LSP setups run the language server as a separate process and communicate over network sockets or standard I/O. That works fine on a desktop, where creating processes and passing messages between them is inexpensive. But on mobile, that overhead adds up.

Our Kotlin language server runs in the same process as the editor, in the same JVM. Communication happens through direct method calls and an event bus rather than serialized JSON messages. This removes an entire layer of latency that traditional LSP clients have to deal with.

When you type, the editor publishes a change event. The language server picks it up, waits for a 500ms window to batch rapid keystrokes, then runs the analysis pipeline on a background thread. Results flow back through the same in-memory channel. The editor stays responsive while the analysis runs in the background.
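The batching step is a classic debounce. Here's a minimal sketch (class and callback names are invented for the example, and our production version runs inside the editor's own threading model) showing how rapid keystrokes collapse into a single analysis pass:

```kotlin
import java.util.concurrent.*
import java.util.concurrent.atomic.AtomicInteger

// Illustrative debouncer: each change event cancels the pending run
// and reschedules analysis after the quiet window, off the UI thread.
class AnalysisDebouncer(
    private val delayMs: Long = 500,
    private val analyze: (String) -> Unit,
) {
    private val executor = Executors.newSingleThreadScheduledExecutor()
    private var pending: ScheduledFuture<*>? = null

    @Synchronized
    fun onDocumentChanged(text: String) {
        pending?.cancel(false) // drop the stale run; a newer edit superseded it
        pending = executor.schedule({ analyze(text) }, delayMs, TimeUnit.MILLISECONDS)
    }

    fun shutdown() = executor.shutdown()
}

val runs = AtomicInteger()
val debouncer = AnalysisDebouncer(delayMs = 100) { runs.incrementAndGet() }
repeat(5) { debouncer.onDocumentChanged("draft $it") } // five rapid keystrokes
Thread.sleep(400)
println(runs.get()) // 1 — the five events were batched into one analysis pass
debouncer.shutdown()
```

The same shape works whether the quiet window is 100ms or 500ms; the tradeoff is diagnostic freshness versus wasted work on keystrokes that are about to be superseded.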

Classpath indexing without full decompilation

If your project uses AndroidX, Material components, or your own library modules, the language server needs to know about those classes to provide accurate completions and diagnostics. We index your project’s classpath by reading class metadata (names, members, method signatures, type hierarchies) from JAR and AAR files in the Gradle build cache. We don’t perform full decompilation, which keeps the process fast and memory-efficient. When dependencies change after a build or sync, only the affected files are re-indexed.
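The "metadata without decompilation" idea can be illustrated with plain `java.util.jar`: entry names alone give you the class list, with no bytecode analysis. (The real indexer goes further and reads members and signatures from the class files' metadata; the snippet builds a throwaway JAR so it is self-contained.)

```kotlin
import java.io.File
import java.util.jar.*

// List top-level class names in a JAR by reading entry metadata only.
fun classNames(jar: File): List<String> =
    JarFile(jar).use { jf ->
        jf.entries().asSequence()
            .filter { it.name.endsWith(".class") && !it.name.contains('$') }
            .map { it.name.removeSuffix(".class").replace('/', '.') }
            .toList()
    }

// Self-contained demo: write a tiny JAR with one (empty) class entry.
val jar = File.createTempFile("deps", ".jar")
JarOutputStream(jar.outputStream()).use { out ->
    out.putNextEntry(JarEntry("androidx/collection/ArrayMap.class"))
    out.closeEntry()
}
println(classNames(jar)) // [androidx.collection.ArrayMap]
jar.delete()
```

Skipping inner classes (the `$` filter) and never touching method bodies is what keeps this pass cheap enough to re-run after every dependency change.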

How it works in practice

There’s nothing to configure. Open a .kt file and the language server starts automatically.
  • Completions appear as you type on trigger characters, or you can invoke them manually.
  • Errors show up inline. Tap a diagnostic to see the full message.
  • Long-press a symbol or use the context menu for go-to-definition or find references.
  • Tap an identifier to see type information.
  • Use the outline view to navigate your file, or search for symbols across your workspace.

If your project has been built, classpath indexing starts automatically and your full dependency tree is available for completions and diagnostics.

Closing the gap: What it means for mobile Kotlin development

Writing Kotlin on a phone has typically meant accepting a limited experience, without the language intelligence you get in a desktop IDE. Our new support for the Kotlin LSP closes much of that gap.

Code on the Go now provides the same categories of language support you’d expect from any desktop IDE: completions, diagnostics, navigation, type information, and semantic understanding. All of it runs locally on your device, without needing a remote server connection.

We’re continuing to expand our Kotlin support to include type inference coverage, improve completion relevance, and deepen semantic analysis. If you’re building Android apps with Kotlin on Code on the Go, please consider enabling experimental features and trying the LSP. Then, let us know what you think of the new capabilities as well as what you think we should add next. Happy coding!