- Spearheaded agentic engineering adoption at Jobcase.
- Supported the search-and-match team's work prioritization, design, architecture, and planning throughout several rounds of company downsizing.
- Directed and conducted several rounds of performance overhauls of the Jobcase Search infrastructure and related services.
- Brought Jobcase Solr infrastructure current with LTS versions of critical components (Solr, JVM, etc.), increasing performance and reducing operational risks and costs.
- Designed and implemented a set of complex changes to the budgeting rule system, enabling extraction of much-needed additional revenue from job searches.
- Integrated LLM-driven, low-code/no-code analyst-controlled semantic tagging into search results.
- Led search performance optimizations and R&D, informing search infrastructure investments for two consecutive years.
- Conducted R&D work and POCs concerning the use of large decision forests during real-time document scoring with demonstrable results.
- Conducted R&D work and POCs concerning novel methods of hybrid spatial-semantic vector search, with demonstrable results.
Uses a variant of the Jobcase Feature Store to provide an AWS Valkey-based durable, low-latency store for job listings. It is used internally by the various services involved in job search and profile matching to hydrate search results. The Job store also allows millisecond-latency updates to the non-indexed fields of job listings.
Java · Spring · Docker · Kubernetes · AWS Valkey · Apache Solr
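A minimal sketch of the Job store's update semantics, assuming field-level writes on a per-listing hash (the HSET/HGETALL pattern Valkey offers); all names here are illustrative, and a ConcurrentHashMap stands in for the actual Valkey backend so the sketch is self-contained:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the Job store: one hash per job listing,
// so a single non-indexed field can be updated in place without
// touching the search cores (mirrors Valkey HSET/HGETALL semantics).
public class JobStoreSketch {
    private final Map<String, Map<String, String>> jobs = new ConcurrentHashMap<>();

    // Hydrate a search result: fetch the full listing by id (HGETALL equivalent).
    public Map<String, String> hydrate(String jobId) {
        return jobs.getOrDefault(jobId, Map.of());
    }

    // Millisecond-latency update of a single non-indexed field (HSET equivalent).
    public void updateField(String jobId, String field, String value) {
        jobs.computeIfAbsent(jobId, k -> new ConcurrentHashMap<>()).put(field, value);
    }

    public static void main(String[] args) {
        JobStoreSketch store = new JobStoreSketch();
        store.updateField("job-1", "title", "Line Cook");
        store.updateField("job-1", "clicks", "17");  // non-indexed counter
        store.updateField("job-1", "clicks", "18");  // cheap in-place update, no reindex
        System.out.println(store.hydrate("job-1").get("clicks"));
    }
}
```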
Major migration effort across several services and custom Solr components, bringing the Solr infrastructure current and runnable on top-of-the-line JDKs and the ARM architecture. Included decommissioning a number of legacy components and services, introducing partitioned cores and federated search capabilities, and hydrating from the Job cache.
Java · Spring · Docker · Kubernetes · AWS ElastiCache Valkey · Apache Solr · Apache Solr custom components
Series of tactical projects aimed at reducing search latency; demonstrated a reduction of 95th-percentile latency from over 1 second to 0.2 seconds for typical job searches. Techniques included key-space partitioning, spatial indexing, high-dimensionality anchor indexing, HNSW indices, and query sharding.
Java · Apache Solr · PostgreSQL
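For the HNSW-index technique, Solr 9.x exposes it through `DenseVectorField` and the `knn` query parser; a hedged illustration (field names, dimensions, and HNSW parameters here are hypothetical, not the actual Jobcase schema):

```xml
<!-- schema.xml: dense vector field backed by an HNSW index -->
<fieldType name="knn_vector" class="solr.DenseVectorField"
           vectorDimension="256" similarityFunction="cosine"
           knnAlgorithm="hnsw" hnswMaxConnections="16" hnswBeamWidth="100"/>
<field name="job_vector" type="knn_vector" indexed="true" stored="false"/>
```

Queried with the `knn` parser, e.g. `q={!knn f=job_vector topK=50}[0.12, -0.30, ...]`, which returns the approximate nearest listings and can be combined with conventional filter queries for the hybrid spatial-semantic case.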
Increased the efficiency and accuracy of advertiser budget capping. Includes improvements (and a partial rework) of budget residual computation and pricing rule evaluation. Increased per-client yield and reduced over-billing. Used functional programming and pre-aggregated data state to speed up budgeting rule processing, which made the new fine-grained budget enforcement, including negative matches, feasible for production use.
Java · Spring · Docker · Kubernetes · AWS Kinesis
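The functional, pre-aggregated approach can be sketched as follows; this is an illustrative reconstruction under assumed names (`Snapshot`, `underCap`, `notBlocked`), not Jobcase's actual rule API: rules become composable predicates over a pre-aggregated spend snapshot, so per-search evaluation is cheap and negative matches are just another predicate.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch: budgeting rules as predicates over pre-aggregated state.
public class BudgetRuleSketch {
    // Pre-aggregated spend state, refreshed out-of-band (e.g., from a Kinesis stream).
    record Snapshot(Map<String, Double> spentByCampaign, Map<String, Double> capByCampaign) {}

    // Positive rule: the campaign still has budget residual left.
    static Predicate<String> underCap(Snapshot s) {
        return c -> s.spentByCampaign().getOrDefault(c, 0.0)
                  < s.capByCampaign().getOrDefault(c, Double.MAX_VALUE);
    }

    // Negative match: exclude explicitly blocked campaigns.
    static Predicate<String> notBlocked(List<String> blocked) {
        return c -> !blocked.contains(c);
    }

    public static void main(String[] args) {
        Snapshot snap = new Snapshot(
                Map.of("acme", 99.5, "globex", 40.0),
                Map.of("acme", 100.0, "globex", 35.0));
        // Rules compose functionally; evaluation touches no raw spend events.
        Predicate<String> billable = underCap(snap).and(notBlocked(List.of("initech")));
        System.out.println(billable.test("acme"));    // under cap, not blocked
        System.out.println(billable.test("globex"));  // cap exceeded
        System.out.println(billable.test("initech")); // negative match
    }
}
```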
Uses LLM-generated labeling during search to surface important aspects of a job, company, or employment to the user. Uses a taxonomy of LLM-generated labels, plus several embeddings, to control how the tags are surfaced to the user for maximum impact.
Java · Apache Solr custom components · Caffeine
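One way the embedding-controlled surfacing could work, sketched with toy vectors; this is an assumption about the mechanism (ranking label embeddings against the query embedding by cosine similarity), and all names and vectors here are hypothetical:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Illustrative sketch: choose which LLM-generated labels to surface by
// ranking label embeddings against the query embedding.
public class TagSurfacingSketch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the k labels most aligned with the query embedding.
    static List<String> topTags(double[] query, Map<String, double[]> tagEmbeddings, int k) {
        return tagEmbeddings.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(query, e.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        double[] query = {1.0, 0.1, 0.0};  // toy 3-d embedding of the user's search
        Map<String, double[]> tags = Map.of(
                "remote-friendly", new double[]{0.9, 0.2, 0.1},
                "no-experience",   new double[]{0.0, 1.0, 0.0},
                "night-shift",     new double[]{0.1, 0.0, 1.0});
        System.out.println(topTags(query, tags, 1));
    }
}
```

Analysts then steer surfacing without code changes by editing the taxonomy and the embeddings, matching the low-code/no-code goal.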
Rework of the job listing ingestion pipelines, with the intent to run heavy, massively parallel computations at ingestion time, including LLM workloads, and to support numerous destination systems (Solr singleton and partitioned cores, the Job cache, vector databases, etc.). Replaces a swarm of legacy scripts and database-to-database batch processors, uses streams for workload serialization, and takes advantage of JEP-444 virtual threads for parallel computation, allowing long (tens of seconds) processing per job listing without degradation of service.
Java · Spring · Docker · Kubernetes · AWS ElastiCache Valkey · Apache Solr · PostgreSQL (with vector extensions)
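The JEP-444 pattern the pipeline relies on (JDK 21+) can be sketched as below; the enrichment stand-in and all names are illustrative, with the sleep scaled down from the real tens-of-seconds workloads:

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch: one virtual thread per job listing, so long per-listing
// enrichment (e.g., LLM calls) runs in parallel without pinning platform threads.
public class IngestionSketch {
    record Listing(String id) {}

    // Stand-in for slow per-listing enrichment.
    static String enrich(Listing l) throws InterruptedException {
        Thread.sleep(Duration.ofMillis(50)); // simulates long processing, scaled down
        return l.id() + ":enriched";
    }

    public static void main(String[] args) throws Exception {
        List<Listing> batch = List.of(new Listing("a"), new Listing("b"), new Listing("c"));
        // Virtual-thread-per-task executor (JEP 444): cheap to spawn one thread per listing.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = batch.stream()
                    .map(l -> pool.submit(() -> enrich(l)))
                    .toList();
            for (Future<String> f : futures) {
                System.out.println(f.get());
            }
        }
    }
}
```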
Joint prototyping with the Jobcase analytics/ML team to replace state-heavy, relatively slow pricing models with pre-trained decision forests (blended multi-factor decision trees). The original implementation was too slow (100K-500K evaluations per second per thread); the final optimized version achieved 1.3-1.5M evaluations per second per thread, making it feasible for real-time production use. The POC effectively became a compiler from CSV tree models to JVM bytecode, using evaluation-graph optimizations and generating bytecode with ByteBuddy tools. Direct output to JVM bytecode was necessary because the model "formulas" are too big for the normal Java compiler.
Java · ByteBuddy · JVM
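The ByteBuddy code generation itself is out of scope for a self-contained sketch, but the evaluation being compiled can be illustrated with a flattened-array tree evaluator; this is a stand-in for the idea (the real POC effectively unrolled this loop into straight-line generated branches), and the tiny tree here is made up:

```java
// Stand-in for the compiled-tree idea: the generated bytecode replaces this
// interpretive per-node loop with direct branches over the model's splits.
public class TreeEvalSketch {
    // Node i: featureIndex[i] >= 0 means "if features[featureIndex[i]] < threshold[i]
    // go to left[i], else right[i]"; featureIndex[i] < 0 marks a leaf holding value[i].
    static double evaluate(int[] featureIndex, double[] threshold,
                           int[] left, int[] right, double[] value, double[] features) {
        int node = 0;
        while (featureIndex[node] >= 0) {
            node = features[featureIndex[node]] < threshold[node] ? left[node] : right[node];
        }
        return value[node];
    }

    public static void main(String[] args) {
        // Toy tree: root splits on feature 0 at 0.5; leaves return 1.0 and 2.0.
        int[] fi = {0, -1, -1};
        double[] th = {0.5, 0, 0};
        int[] lt = {1, -1, -1};
        int[] rt = {2, -1, -1};
        double[] val = {0, 1.0, 2.0};
        System.out.println(evaluate(fi, th, lt, rt, val, new double[]{0.3})); // takes left branch
        System.out.println(evaluate(fi, th, lt, rt, val, new double[]{0.9})); // takes right branch
    }
}
```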
Some Jobcase projects have used containerized development environments (earlier with IntelliJ, later with VS Code remote extensions, often via DevPod but often as-is) to standardize dev environments and provide fast context switching and onboarding. Most of the projects I contributed to run in containerized dev environments. In 2025 we supported the executive push for AI adoption by embedding Claude into these development environments and furnishing it with centrally- and team-managed skill sets specific to each project. This delivered AI adoption to many Jobcase projects practically overnight, enabling engineers to use new agentic capabilities and protocols: awareness of the GitLab and JIRA systems, DbC-grade code reviews, RFR protocols, and agent familiarity with tools, standards, infrastructure principles, and so on.