
Nicholas Brezinski

Software Engineer

Based in Tokyo, Japan. I'm a CS new grad building applications around my hobbies: computer vision for rock climbing, language-learning software, and business-efficiency tools for rock climbing gyms.

Looking for roles in software engineering, data/ML, and AI, and exploring microcontrollers next. Open to opportunities; let's connect.

github: @brezys
email:

Projects

DropMap

open source

Fortnite Optimization Tool

DropMap is a physics-driven route planner for Fortnite drops: given a target POI, it computes the earliest viable jump point from the moving battle bus under a simplified dive/glide model. It treats the bus as a time-parameterized trajectory, then searches along that path for the jump time that minimizes modeled travel time (bus-to-jump delay + descent/glide to the target) while still guaranteeing the target is reachable. The app also outputs a separate “Aim Point” (where to steer post-jump) and renders the resulting geometry as a clear path visualization, so the recommendation is explainable rather than a black box. The model and its constants (bus speed, heights, dive/glide speeds) are documented in the repo, keeping the planner tunable as game movement mechanics change. Built with Next.js + React + TypeScript, it’s essentially a small simulation + optimization engine wrapped in a fast UI for repeatable decision-making.
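The core idea (bus as a time-parameterized path, scanned for the earliest reachable jump) can be sketched in a few lines. This is an illustrative Python sketch, not the repo's TypeScript code, and the constants are placeholders rather than the values documented in the repo:

```python
import math

# Placeholder constants -- NOT the values documented in the DropMap repo.
BUS_SPEED = 100.0    # map units / s
BUS_HEIGHT = 1000.0  # drop altitude
DIVE_SPEED = 80.0    # vertical descent speed, units / s
GLIDE_RATIO = 2.0    # horizontal units traveled per unit of altitude lost

def earliest_jump(bus_start, bus_dir, target, t_max=60.0, dt=0.25):
    """Scan candidate jump times along the bus trajectory and return the
    jump minimizing arrival time at the target, subject to reachability."""
    best = None
    t = 0.0
    while t <= t_max:
        # Bus position as a time-parameterized trajectory.
        bx = bus_start[0] + bus_dir[0] * BUS_SPEED * t
        by = bus_start[1] + bus_dir[1] * BUS_SPEED * t
        dist = math.hypot(target[0] - bx, target[1] - by)
        air_time = BUS_HEIGHT / DIVE_SPEED      # simplified time-to-ground
        max_reach = GLIDE_RATIO * BUS_HEIGHT    # max horizontal glide range
        if dist <= max_reach:                   # reachability constraint
            arrival = t + air_time              # bus delay + descent
            if best is None or arrival < best[1]:
                best = (t, arrival, (bx, by))
        t += dt
    return best  # (jump_time, arrival_time, jump_point) or None
```

Since arrival time grows with bus delay, the earliest reachable jump wins, which matches the "earliest reachable jump point" framing above.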

TypeScript · Next.js · React · Fortnite API
2026 · GitHub

Vibetool

open source

Lightweight Dictation Desktop App

Vibetool is a system-wide speech-to-text utility: it streams microphone audio (via PyAudio), feeds it into a local Vosk speech-recognition model, and emits text in real time. Instead of only showing transcripts inside the app, it types the recognized text at your current cursor location, so it works in any editor or text field. The UX is intentionally minimal (start/stop listening) because the real integration point is the operating system’s focused input target, not a custom editor. The model is pinned as an explicit dependency (vosk-model-small-en-us-0.15), keeping the pipeline offline-friendly and predictable (no network inference latency). Cross-platform friction is handled via documented install paths for the audio stack (PortAudio/PyAudio), which is usually the hardest part of desktop STT apps.
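The mic-to-Vosk-to-keyboard pipeline looks roughly like the sketch below. This is a minimal sketch under assumptions, not Vibetool's actual code: `pyautogui` as the typing backend is my assumption (the repo may use a different library), and the loop assumes a 16 kHz mono stream. The third-party imports live inside `main()` so the small JSON helper stays usable without the audio stack installed:

```python
import json

def extract_text(result_json: str) -> str:
    """Pull the recognized text out of a Vosk Result() JSON payload."""
    return json.loads(result_json).get("text", "")

def main():
    # Heavy/optional deps imported here so extract_text() above is
    # importable without PortAudio or a downloaded model.
    import pyaudio
    import pyautogui  # assumption: one possible way to type at the cursor
    from vosk import Model, KaldiRecognizer

    model = Model("vosk-model-small-en-us-0.15")  # the pinned model
    rec = KaldiRecognizer(model, 16000)

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                     input=True, frames_per_buffer=8000)
    try:
        while True:
            data = stream.read(4000, exception_on_overflow=False)
            if rec.AcceptWaveform(data):            # a full utterance ended
                text = extract_text(rec.Result())
                if text:
                    pyautogui.write(text + " ")     # type into the focused field
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()

# Call main() to start dictation (requires a microphone and the model files).
```

Typing into the OS's focused input target is what makes this work in any editor: the app never needs to know which application has focus.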

Python · PyAudio · Vosk · PortAudio
2025 · GitHub

ClimbingAnalysis

archived

Camera-first Climbing Analysis Pipeline

ClimbingAnalysis is a camera-first analysis pipeline: you record a climb, run MediaPipe pose estimation frame by frame, and convert keypoints into movement features such as joint angles, body lean, arm extension, and center-of-gravity motion. For route context, users upload a wall/route image and the system performs color-based hold detection, turning holds into structured “targets” that the climber’s hands and feet can be compared against. Sessions are then stored with synchronized metrics so playback can render skeletal overlays and support attempt review. The comparison mode aligns two attempts to contrast efficiency (time/path) and highlight technique differences as measurable deltas. Built largely through vibe coding and since archived.
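Turning pose keypoints into features like joint angles and center-of-gravity motion is straightforward vector math. A minimal Python sketch (the project itself is TypeScript; function names here are hypothetical, and the COG is an unweighted centroid rather than a mass-weighted one):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    e.g. the elbow angle from shoulder, elbow, wrist keypoints."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def center_of_gravity(keypoints):
    """Unweighted centroid of 2D keypoints -- a rough COG proxy."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Run per frame over MediaPipe's landmark coordinates, these per-joint angles and the COG track become the time series that attempt comparison can diff.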

TypeScript · React · MediaPipe · PostCSS · Tailwind · shadcn/ui
2024 · GitHub