What We Need
This role involves working closely with cross-functional teams to translate business needs into effective AI-driven solutions. The ideal candidate is familiar with LLMs and prompt engineering and can think critically about how these technologies can be applied to solve real-world problems. You will be at the forefront of defining AI strategies, experimenting with cutting-edge techniques, and shaping the future of our AI-powered products.
Responsibilities
- Prompt Engineering & LLM Optimization: Design, develop, and refine prompts for various applications, including Assistants, leveraging platforms such as OpenAI, Anthropic, and Gemini, as well as open-source models. Define output formats (such as JSON schemas) and ensure optimal performance through rigorous testing and iteration, potentially using prompt engineering and LLM tracing tools like Langtail (see the structured-output sketch after this list).
- AI Solution Design: Translate business requirements into AI-driven strategies, with a focus on feasibility, cost-efficiency, and long-term maintainability.
- Semantic Search and Embedding Expertise: Leverage your understanding of embedding models and vector databases to enhance our semantic search capabilities. This could include tasks like finding similar terms in a database and reranking documents.
- Microservice Maintenance and Extension: Maintain and, when necessary, extend an existing Python microservice (built with Flask or FastAPI) focused on semantic similarity search, ensuring its continued performance and reliability (a minimal sketch follows this list).
- AI Research & Application: Stay abreast of the latest trends in AI, including LLMs, generative AI, and related fields. Apply this knowledge to develop innovative solutions and enhance existing offerings.
- Collaboration and Communication: Work closely with project managers, software developers, and stakeholders to understand client requirements and ensure the delivery of high-quality solutions. Clearly communicate complex technical concepts and results to both technical and non-technical audiences.
- Performance Monitoring: Monitor and improve the performance of existing AI solutions, ensuring scalability and reliability.
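To give a concrete flavour of the prompt-engineering work above, here is a minimal sketch of constraining an LLM to a JSON output format using the OpenAI Python SDK. The model name, task, and schema are illustrative assumptions, not a description of our stack.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical task: extract structured order data from a free-text message.
SYSTEM_PROMPT = (
    "You extract order details from customer messages. "
    "Respond only with JSON of the shape "
    '{"customer": string, "items": [{"sku": string, "quantity": integer}]}.'
)

def extract_order(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
        response_format={"type": "json_object"},  # constrain the reply to valid JSON
        temperature=0,
    )
    return response.choices[0].message.content
```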
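In the same spirit, here is a minimal sketch of the kind of semantic-similarity endpoint such a microservice might expose, assuming FastAPI and an open-source sentence-transformers embedding model; the route, model, and request shape are illustrative, not the actual service.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer
import numpy as np

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative open-source embedding model

class SimilarityRequest(BaseModel):
    query: str
    candidates: list[str]

@app.post("/rank")
def rank(req: SimilarityRequest) -> list[dict]:
    # Embed the query and candidate terms, then rank candidates by cosine similarity.
    vectors = model.encode([req.query] + req.candidates, normalize_embeddings=True)
    query_vec, candidate_vecs = vectors[0], vectors[1:]
    scores = candidate_vecs @ query_vec  # cosine similarity (vectors are unit-normalised)
    order = np.argsort(scores)[::-1]
    return [{"text": req.candidates[i], "score": float(scores[i])} for i in order]
```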
Qualifications
- Strong understanding of LLMs, prompt engineering, and generative AI. Demonstrated experience working with LLMs across different platforms (OpenAI, Anthropic).
- Experience with semantic search and embedding models. Familiarity with vector databases, particularly Elasticsearch, and embedding APIs such as Voyage AI (see the retrieval sketch after this list).
- Proficiency in Python, sufficient to maintain and extend an existing microservice built with Flask or FastAPI.
- Familiarity with RESTful API design and development.
- Experience with tools like Hugging Face Transformers or similar.
- Strong problem-solving skills and the ability to translate complex business requirements into technical solutions.
- Excellent communication skills for collaboration with cross-functional teams and stakeholder presentations.
- Eagerness to learn new concepts and apply them to solve real-world problems.
- Full professional proficiency in English.
- Czech- and Slovak-speaking candidates are preferred.
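For orientation, a hedged sketch of the retrieval pattern mentioned above: embedding a query with the Voyage AI API and running a kNN search against an Elasticsearch index. The index name, field names, and model are illustrative assumptions.

```python
import voyageai
from elasticsearch import Elasticsearch

vo = voyageai.Client()                       # assumes VOYAGE_API_KEY is set
es = Elasticsearch("http://localhost:9200")  # illustrative cluster address

def find_similar_terms(query: str, k: int = 5) -> list[dict]:
    # Embed the query text, then retrieve the nearest stored term vectors.
    query_vector = vo.embed([query], model="voyage-2", input_type="query").embeddings[0]
    result = es.search(
        index="terms",                       # hypothetical index of term embeddings
        knn={
            "field": "embedding",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 50,
        },
        source=["term"],
    )
    return [
        {"term": hit["_source"]["term"], "score": hit["_score"]}
        for hit in result["hits"]["hits"]
    ]
```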
Bonus points for
- Experience with Langtail or similar low-code prompt engineering tools.
- Knowledge of containerization technologies (Docker, Kubernetes).
- Familiarity with the Azure cloud platform.
You are going to love this job if you
- Are passionate about the potential of AI and eager to push the boundaries of what's possible with LLMs.
- Enjoy the challenge of crafting effective prompts and optimizing their performance.
- Can bridge the gap between technical AI capabilities and business needs.
- Are a proactive learner who stays up to date with the rapidly evolving AI landscape.