Job Board

San Francisco 2019

April 1
Training
April 2
Conference

Minimum qualifications:

  • BS degree in Computer Science, a similar technical field of study, or equivalent practical experience.
  • Software development experience in one or more general purpose programming languages.
  • Experience working with two or more from the following: web application development, Unix/Linux environments, mobile application development, distributed and parallel systems, machine learning, information retrieval, natural language processing, networking, developing large software systems, and/or security software development.
  • Working proficiency and communication skills in verbal and written English.

Preferred qualifications:

  • Master’s or PhD degree, or further education or experience in engineering, computer science, or another related technical field.
  • Experience with one or more general purpose programming languages including but not limited to: Java, C/C++, C#, Objective C, Python, JavaScript, or Go.
  • Experience developing accessible technologies.
  • Interest and ability to learn other coding languages as needed.

Minimum qualifications:

  • BS degree in Computer Science, a similar technical field of study, or equivalent practical experience
  • 5 years of software development experience in Java, C++, C#, or Golang
  • Experience in designing and developing large-scale distributed systems, APIs, workflows, concurrency, multithreading, and synchronization
  • Experience in developing Windows Agent software and/or SQL Server features, and in designing, implementing, and running SaaS in SQL Server and Windows environments

Preferred qualifications:

  • 10 years of relevant work experience in software development
  • Experience in building infrastructure as a service (IaaS), platform as a service (PaaS), etc.
  • Experience in running SQL Server and/or Windows production environments for SaaS
  • Experience in building cloud-based Database or Storage services

Minimum qualifications:

  • BS in Computer Science or related technical field or equivalent practical experience
  • Software development experience in one or more general purpose programming languages
  • Experience in at least one of the following: test automation, refactoring code, test-driven development, build infrastructure, optimizing software, debugging, building tools and testing frameworks

Preferred qualifications:

  • Master's or PhD in Computer Science or related technical field
  • Experience with one or more general purpose programming languages including but not limited to: Java, C/C++, C#, Objective C, Python, JavaScript, or Go
  • Scripting skills in Python, Perl, Shell or another common language
  • Our Senior Engineers are end-to-end owners. You will participate actively in all aspects of designing, building, and delivering products for our clients.
  • We have dozens of individual, mission-focused teams working across a wide spectrum of technological challenges. You will have the opportunity, depending on your interests and aptitude, to work on large-scale distributed systems coordinating thousands of servers in cloud and physical data centers around the world, petabyte-scale data challenges, machine learning, advanced visualizations, and interactive user interfaces – to name a few.
  • Senior Engineers contribute to more than our product – they build up our team. Through a combination of mentoring, technical leadership, and/or direct management of small teams, they make others better and raise the bar for those around them. 

We are looking for full-stack SOA engineers with grit. Every day we're solving problems that have either never been solved before, or have never been solved at this scale. We run our team very fast and very lean, which means every engineer has a high degree of ownership and potential impact – and we are looking for candidates with the chops to handle it.

Recently named one of the top 10 most promising companies in America by Forbes Magazine and one of the "Best Places To Work" in the nation by Outdoor Magazine, The Trade Desk offers a culture of "relaxed intensity" – one that comes from working alongside one of the most talented teams in our industry, and leading in a race that is ours to lose.

BASIC QUALIFICATIONS

· Bachelor’s Degree in Computer Science or related field
· Equivalent experience to a Bachelor’s degree: 3 years of relevant work experience for every 1 year of education
· 5+ years professional experience in software development
· Experience taking a leading role in building complex software systems that have been successfully delivered to customers
· Proficiency in at least one modern programming language such as C, C++, C#, Java, or Perl
· Excellent communication skills and the ability to work well in a team.
· Ability to excel in a fast-paced, startup-like environment.

PREFERRED QUALIFICATIONS

· Experience building extremely high volume and highly scalable web services.
· Experience building highly available systems and operating 24x7 services.
· Experience with distributed systems, consistent hashing, distributed locking, replication, and load balancing.
· Working knowledge of Kubernetes, Hadoop, MapReduce, Storm, Spark, or other Big Data processing platforms.
· Experience with at least one modern scripting language such as Ruby, Python or PHP.
· Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
· Strong customer focus, ownership, urgency and drive.
· Master’s degree or PhD in Computer Science.

BASIC QUALIFICATIONS

· 2+ years of non-internship professional software development experience.
· Programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design.
· 1+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems.

PREFERRED QUALIFICATIONS

· Experience making contributions to open-source platforms.
· Experience building extremely high volume and highly scalable online services.
· Experience operating highly available services.
· Experience with distributed systems, consistent hashing, distributed locking, check-pointing, and load balancing.
· Working knowledge of Hadoop, MapReduce, Kafka, Kinesis, Spark or other Big Data processing platforms.
· Ability to excel in a fast-paced, startup-like environment.
· Experience mentoring other engineers.
· Strong problem solving ability and object-oriented design skills.

You will love this job if you …

  • … have a good knowledge of Java (Scala is a plus)
  • … have an aptitude for simple, robust, and elegant designs, including how to design appealing APIs and libraries
  • … have experience working collaboratively on large code bases (a plus)

What we offer … 

  • Competitive salary
  • Great career opportunities in a world-class team with peers from top-companies, universities and research institutes.
  • Tech gear of your choice
  • International team environment (10 nationalities so far)
  • Flexible working arrangements (home office, flexible working hours)
  • Unlimited vacation policy, so take time off when you need it
  • Snacks, coffee and beverages in the office
  • Relocation assistance if needed
  • Hackathons and weekly technical Lunch Talks to keep your head full of inspirations and ideas!
  • Subsidized Gym membership (Urban Sports Club)
  • Subsidized German classes in the office
  • Free Lunch 3 times a week in the office
  • Free public transportation ticket

You will love this job if you …

  • … have experience in building large data processing or distributed systems during PhD research or prior work experience
  • … have a deep understanding of one or more of the following areas: distributed systems, database systems, performance optimization
  • … have a strong foundation of algorithms and application design
  • … have an aptitude for simple, robust, and elegant designs, including how to design appealing APIs and libraries
  • … have experience in developing systems or working on large code bases in any programming language

What we offer …

  • Competitive salary
  • Great career opportunities in a world-class team with peers from top-companies, universities and research institutes.
  • Tech gear of your choice
  • International team environment (10 nationalities so far)
  • Flexible working arrangements (home office, flexible working hours)
  • Unlimited vacation policy, so take time off when you need it
  • Snacks, coffee and beverages in the office
  • Relocation assistance if needed
  • Hackathons and weekly technical Lunch Talks to keep your head full of inspirations and ideas!
  • Subsidized Gym membership (Urban Sports Club)
  • Subsidized German classes in the office
  • Free Lunch 3 times a week in the office
  • Free public transportation ticket

What you’ll do all day:

  • Use your experience to solve challenging data engineering and stream processing problems for our customers
  • Meet with customers, understand their requirements, and help guide them towards best-of-breed architectures
  • Provide guidance and coding assistance during the implementation phase and make sure projects end in successful production deployments
  • Become an Apache Flink and stream processing expert

You will love this job if you …

  • … are experienced in building and operating solutions using distributed data processing systems on large scale production environments (e.g. Hadoop, Kafka, Flink, Spark)
  • … are fluent in Java and/or Scala
  • … love to spend the whole day talking about Big Data technologies
  • … have great English skills and like talking to customers
  • … like traveling around Europe/the USA and visiting new places

What we offer:

  • Competitive salary
  • International team environment (10 nationalities so far)
  • Flexible working arrangements (home office, flexible working hours)
  • Unlimited vacation policy, so take time off when you need it

What you’ll be doing:

As a Senior Java Developer, you will architect the components and servers that our customers use to solve their biggest problems. The mission of a Java Developer is to design and build capabilities that allow users to analyze their data to meet their needs. They are involved in all stages of the product development and deployment lifecycle: idea generation, user interviews, planning, design, prototyping, execution, shipping, and iteration.

  • Code, test, debug, and install both new programs/technologies and changes to existing programs/technologies of a complex nature with minimal assistance
  • Design programs/technologies under the direction of the Technical Lead and Project Managers
  • Work on the architecture of system design, understanding where your contribution fits into the overall project scope so you have a big-picture understanding

What we need:

  • A minimum of 4+ years of work experience in a similar position or in product development
  • Ability to write clean, maintainable code
  • Strong engineering background
  • Familiarity with data structures, storage systems, cloud infrastructure, distributed computing, and other technical tools
  • Proficiency in Java; pluses: Apache Flink, Apache Kafka, Elixir, Phoenix, or other big-data framework(s)
  • Ability to maintain code integrity and organization
  • Proficient experience using server APIs (REST, JS-API, GraphQL, etc.)
  • A good understanding of the software development process, including development and deployment
  • Understanding and implementation of security and data protection
  • A bachelor’s degree, technical certification, or equivalent work experience

What we want:

  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users
  • Ability to meet tight deadlines in a fast-paced environment and handle multiple assignments/projects at once
  • Ability to communicate and work with people of all technical levels in a team environment
  • Willingness to take feedback and incorporate it into your work
  • Willingness to take direction from the team lead while remaining self-managing and able to make decisions with minimal supervision
  • Ability to deal positively with shifting priorities

Additional requirements:

  • Must work from our Irvine office location
  • Be willing to travel to clients (on occasion)

Benefits:

  • Competitive salary
  • Generous medical, dental, and vision plans
  • Vacation, sick, and paid holidays offered
  • Stand/sit workstations
  • Kitchen stocked with snacks and drinks
  • Work with talented and collaborative co-workers
  • Casual environment

Applications: jobs@cogility.com

Responsibilities:

  • Design and own the way real-time data is consumed, stored, and shared with the entirety of Lyft
  • Build and operate large-scale distributed systems (Kafka, Flink, Zookeeper, etc)
  • Write well-crafted, well-tested, readable, maintainable code
  • Participate in code reviews to ensure code quality and distribute knowledge, including Open-Source projects
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices

Experience and Skills:

  • 3+ years of experience in Streaming and Real-time Applications
  • Experience in either streaming platforms (Flink, Spark, or similar) or distributed messaging (Kafka, Kinesis, or similar)
  • Understanding of distributed systems concepts and principles (consistency and availability, liveness and safety, durability, reliability, fault-tolerance, consensus algorithms, etc)
  • BA/BS in Computer Science, Math, Physics, or another technical field, or equivalent

Responsibilities:

  • Build and operate large-scale data infrastructure programs (performance, reliability, monitoring)
  • Write well-crafted, well-tested, readable, maintainable code
  • Participate in code reviews to ensure code quality and distribute knowledge, including Open-Source projects
  • Share your knowledge by giving brown bags, tech talks, and evangelizing appropriate tech and engineering best practices

Experience & Skills:

  • Deep understanding of distributed systems concepts and principles (consistency and availability, liveness and safety, durability, reliability, fault-tolerance, consensus algorithms)
  • Experience bringing open source software to production at scale (Yarn, HDFS, Hive, Spark, Presto, ZooKeeper, Airflow)
  • Experience designing, implementing and debugging distributed systems that run across thousands of nodes
  • Hands-on experience with Hadoop (or similar) ecosystem - Yarn, Hive, HDFS, Spark, Presto, Parquet, HBase
  • Experience working with and building real-time compute and streaming infrastructure - Kafka, Kinesis, Flink, Storm, Beam
  • Experience configuring, identifying performance bottlenecks and tuning MPP databases
  • Able to think through long-term impacts of key design decisions and handle failure scenarios
  • Experience with workflow management (Airflow, Oozie, Azkaban, UC4)

What You Will Do:

  • Build systems that can effectively store and crunch terabytes of data, and power amazing experiences for Yelp’s users.
  • Learn the fine art of balancing scale, latency and availability depending on the problem.
  • Work with product management and data science to identify and use data that is most relevant to the problem at hand.
  • Observe the power of AI from up close but more importantly, bring it to the mass(es) of data we have at Yelp.

We Are Looking For:

  • 2+ years of relevant industry experience building large scale distributed systems.
  • Deep understanding of the programming languages and systems that you've worked on.
  • A passion for architecting large systems with elegant interfaces that can scale easily.
  • A love for writing beautiful code. We use Java & Python. You don’t need to be an expert, but experience is a plus and we will expect you to learn them on the job.
  • Comfort in running services or batches in a Unix environment.
  • Minimum BA/BS degree in Computer Science, Math, or related degree.
  • A love for delighting Yelp’s users with experiences they shouldn’t live without.
  • If you don't have at least one year of experience in a similar role, please take a look at our College Engineering roles instead!

Pluses:

  • If you have what we are looking for above, reach out! Everything below is either something you are aware of already, or we will provide you the opportunity to learn on the job.
  • Exposure to one or more technologies amongst the likes of ElasticSearch, Hadoop/MapReduce, Spark, NoSQL systems like Cassandra or AWS DB services.

Core Responsibilities

-Enterprise-Level architect for 'Big Data' Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and 'Platform as a Service' capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for 'Big Data' applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with 'Platform as a Service' (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc

-Hands-on experience with Data Flow processing engines, such as Apache NiFi

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.


Education Level

- Bachelor's Degree or Equivalent


Field of Study

- Engineering, Computer Science


Years Experience

11+ years of Software Engineering experience

4+ years in Technical Leadership roles

1+ years in Cloud Infrastructure

Eventador.io is a fully managed, enterprise-grade Stream Processing as a Service platform—built on Apache Kafka and Apache Flink—based in Austin, TX. Whether customers are just getting started with Apache Kafka or they have built their business on streaming data, Eventador unlocks the ability to quickly and easily deploy streaming data-driven applications by handling the complexity of the underlying infrastructure with high-quality software and amazing support.

An ideal candidate should have deep experience in a number of these areas:
  • Operations: Ubuntu, RHEL, Ansible, Boto, Docker, Kubernetes
  • Cloud Engineering: AWS, Scalability/High-availability design (AWS)
  • Databases: PostgreSQL/MySQL/MongoDB
  • Data: Kafka, Flink, Hadoop/HDFS, Spark, Storm
  • Development: Python/Flask/Jinja2, Go, C, Java/Scala
Most importantly, you should be comfortable working directly with customers to provide a great experience! Eventador provides a great working environment and great benefits. Equity is negotiable.