
Nicholas Brezinski

Software Engineer

Based in Tokyo, Japan. CS new grad building applications around my hobbies: computer vision for rock climbing, language-learning software, and tools that improve business efficiency for climbing gyms.

Looking for roles in software engineering, data/ML, and AI. Exploring microcontrollers next. Open to opportunities—let's connect.

GitHub: @brezys
Email:

Projects

KataMichi (片道)

closed source · in development

Learn Japan Through Travel Planning

KataMichi (片道) is a map-first travel planning tool designed to help you learn about Japan by exploring it through one-way trips. You pick a starting prefecture/airport, a rough timeframe, and a region you want to visit, and the app visualizes estimated flight prices across that region—then lets you "chain" the journey by treating each selected destination as the next origin. Along the way, it surfaces practical prefecture insights (cost vibe, climate, highlights) so planning a route doubles as learning geography and travel context. Built with a Leaflet-based interactive map and a budget-aware live pricing backend (Cloudflare Worker + caching), it stays fast and scalable while keeping real-world flight data optional and controlled.
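The "chaining" idea above can be sketched in a few lines: each selected destination becomes the origin of the next one-way leg. This is a minimal illustration only—the prefecture names, fares, and `chain_trip` helper are made up, not KataMichi's actual data model or API.

```python
# Illustrative one-way fare estimates in JPY (invented values).
EST_PRICE = {
    ("Tokyo", "Fukuoka"): 12000,
    ("Fukuoka", "Okinawa"): 9000,
    ("Okinawa", "Sapporo"): 25000,
}

def chain_trip(start, destinations, prices):
    """Walk a list of destinations one-way, treating each stop
    as the next origin, and total the estimated fares."""
    legs, origin, total = [], start, 0
    for dest in destinations:
        fare = prices[(origin, dest)]
        legs.append((origin, dest, fare))
        total += fare
        origin = dest  # the core trick: this destination is the next origin
    return legs, total

legs, total = chain_trip("Tokyo", ["Fukuoka", "Okinawa", "Sapporo"], EST_PRICE)
# total is 46000: three one-way legs summed along the chain
```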

TypeScript · Next.js · React · Flights API · Leaflet · Cloudflare

DiscordDrive

open source

Semantically search for vectorized images across your Discord server.

Media retrieval tool for Discord that makes server images searchable by meaning, not just by scrolling through message history. It supports a hybrid search flow: manual tags boost exact matches when users label an image directly, and a semantic fallback uses OpenCLIP embeddings to retrieve visually or conceptually similar images when strong tag matches are missing. Under the hood, the app combines a Discord bot, a backend embedding/vector-search pipeline with Qdrant, and a frontend for querying and tag management. The result is a lightweight full-stack system that experiments with semantic retrieval inside community media archives using a local Docker-based workflow.
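The hybrid flow can be sketched as: exact tag hits rank first, and only when none exist does the search fall back to nearest-neighbour similarity over embeddings. In DiscordDrive the vectors come from OpenCLIP and live in Qdrant; this toy sketch replaces both with hand-made 3-d vectors and a plain cosine function so it is self-contained—none of the ids, tags, or vectors here are real.

```python
import math

# Toy index: each image has optional manual tags and an embedding vector.
IMAGES = [
    {"id": "a.png", "tags": {"cat"}, "vec": [0.9, 0.1, 0.0]},
    {"id": "b.png", "tags": {"dog"}, "vec": [0.1, 0.9, 0.0]},
    {"id": "c.png", "tags": set(),   "vec": [0.8, 0.2, 0.1]},
]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def search(query_tag, query_vec, images):
    """Return image ids: exact manual-tag matches win outright;
    otherwise rank everything by embedding similarity."""
    tagged = [im for im in images if query_tag in im["tags"]]
    if tagged:
        return [im["id"] for im in tagged]
    ranked = sorted(images, key=lambda im: cosine(query_vec, im["vec"]), reverse=True)
    return [im["id"] for im in ranked]
```

With a real pipeline, `query_vec` would be the OpenCLIP text embedding of the user's query and the ranking would be delegated to Qdrant's vector search rather than an in-memory sort.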

TypeScript · Next.js · React · OpenCLIP · Discord API · Qdrant · Docker

DropMap

open source

Fortnite Optimization Tool

DropMap is a physics-driven route planner for Fortnite drops: given a target POI, it computes the best jump point from the moving battle bus under a simplified dive/glide model. It treats the bus as a time-parameterized trajectory, then searches along that path for the jump time that minimizes modeled travel time (bus-to-jump delay + descent/glide to target) while still guaranteeing the target is reachable. The app also outputs a separate "Aim Point" (where to steer post-jump) and renders the resulting geometry as a clear path visualization, so the recommendation is explainable rather than a black box. Built with Next.js + React + TypeScript, it's essentially a small simulation + optimization engine wrapped in a fast UI for repeatable decision-making.
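The core search can be illustrated with a stripped-down model: the bus moves in a straight line, and for each candidate jump time t the total cost is t (time riding the bus) plus straight-line glide time to the target; the planner takes the argmin over a scanned time grid. All constants here (bus speed, glide speed, coordinates) are invented for the sketch, and the real DropMap model is more detailed.

```python
import math

BUS_START, BUS_VEL = (0.0, 0.0), (50.0, 0.0)  # start point and velocity, units/s
GLIDE_SPEED = 30.0                            # horizontal glide speed, units/s

def bus_pos(t):
    """Bus position at time t along its straight flight path."""
    return (BUS_START[0] + BUS_VEL[0] * t, BUS_START[1] + BUS_VEL[1] * t)

def total_time(t, target):
    """Ride until t, then glide straight to the target."""
    bx, by = bus_pos(t)
    glide = math.hypot(target[0] - bx, target[1] - by) / GLIDE_SPEED
    return t + glide

def best_jump(target, horizon=20.0, step=0.1):
    """Scan candidate jump times and return (best_t, best_total_time)."""
    times = [round(i * step, 2) for i in range(int(horizon / step) + 1)]
    t = min(times, key=lambda t: total_time(t, target))
    return t, total_time(t, target)
```

For a target at (500, 300) this grid search lands on jumping at t = 5.5 s for an 18.0 s total—later than the earliest moment the target is reachable, because riding the bus (faster than gliding) shortens the overall trip.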

TypeScript · Next.js · React · Fortnite API
2026 · GitHub

Vibetool

open source

Lightweight Dictation Desktop App

Vibetool is a type-anywhere speech-to-text utility: it streams microphone audio (via PyAudio), feeds it into a local Vosk speech recognition model, and emits text in real time. Instead of only showing transcripts inside the app, it types the recognized text at your current cursor location, so it works in any editor or text field. The UX is intentionally minimal (start/stop listening) because the real integration point is the operating system's focused input target, not a custom editor. The model is an explicit dependency (vosk-model-small-en-us-0.15), keeping the pipeline offline-friendly and predictable (no network inference latency). Cross-platform friction is handled via documented install paths for the audio stack (PortAudio/PyAudio), which is usually the hardest part of desktop STT apps.

Python · PyAudio · Vosk · PortAudio
2025 · GitHub

ClimbingAnalysis

archived

Camera-first Climbing Analysis Pipeline

ClimbingAnalysis is a camera-first analysis pipeline: you record a climb, run MediaPipe pose estimation frame by frame, and convert keypoints into movement features like joint angles, body lean, arm extension, and center-of-gravity motion. For route context, users upload a wall/route image and the system performs color-based hold detection, turning holds into structured "targets" the climber's hands and feet can be compared against. Sessions are then stored with synchronized metrics so playback can render skeletal overlays and support attempt review. The "comparison" mode aligns two attempts to contrast efficiency (time/path) and highlight technique differences as measurable deltas. Built largely through vibe coding and since abandoned.
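One of the movement features above—a joint angle derived from pose keypoints—is simple to sketch: the interior angle at a joint (say, the elbow) from three 2-D landmarks. The coordinates below are illustrative values in the normalized [0, 1] space MediaPipe uses for image landmarks, not output from the actual pipeline.

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees: b is the joint, a and c its neighbours
    (e.g. shoulder, elbow, wrist for arm extension)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos_t))

# A fully extended arm: shoulder, elbow, wrist roughly collinear -> ~180°
extension = joint_angle((0.30, 0.40), (0.50, 0.40), (0.70, 0.40))
```

Tracking this value per frame gives an arm-extension curve for the attempt; comparing two such curves is the kind of "measurable delta" the comparison mode highlights.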

TypeScript · React · MediaPipe · PostCSS · Tailwind · shadcn/ui
2024 · GitHub

LanGAPP

archived

Real-Time Natural Translation Service

LanGAPP is no longer being developed (since 2023). It was an open-source personal project aimed at building an efficient, real-time translation service in Python. By combining neural networks with voice-to-text and text-to-voice techniques, it bridged the language barrier between English and Spanish, and between English and Japanese—and made crossing it more pleasant, thanks to the familiar-sounding cadence and intonation of the neural voices.

Python · Neural Networks · Natural Language Processing · Docker · VB-Virtual Audio · ElevenLabs · DeepL · Cmd Tool · Voicevox
2023GitHub