Cloud Engineer

Vulcan AI is looking for passionate Cloud Engineers

We are an enterprise AI solution provider that helps businesses do more with less: better outcomes, with fewer resources and a smaller carbon footprint. We build AI to better manage the Internet of Things (IoT) and power responsible enterprises.

We build intelligent applications that incorporate industry knowledge, functional processes, and engineering know-how into AI applications that can be deployed more than 10X faster than typical consulting engagements.

Our products are elegantly designed, not just on the outside but also on the inside, creating a user experience that both users and enterprises love.

We are a team of people who are passionate about AI, IoT, and Big Data and who love their respective crafts. The founding members are leaders from Accenture and EY, where they incubated and scaled AI practices. The management team has a track record of delivering successful AI and IoT projects across APAC and building commercially successful analytics businesses. Our AI team members have published research papers and hold patents in the field.

As a Cloud Infrastructure/DevOps/Data Engineer, you will work alongside talented data engineering and data science teams to build and deploy AI applications for our clients on cloud data ecosystems. You will use your expertise to solve complex challenges in data engineering and deployment for the SPOCK platform.

Responsibilities

  • Develop and test cloud infrastructure at scale
  • Develop and deploy automated cloud infrastructure provisioning using advanced DevOps and automation tools
  • Monitor, manage, and respond to incidents, and continuously improve automated cloud configuration, deployment, monitoring, management, and incident response
  • Work with various teams to improve data infrastructure through automation
  • Work closely and cross-functionally with Product Management, Designers, and Data Scientists, and incorporate their feedback.
  • Build and improve data engineering applications that enable data scientists to build end-to-end AI applications quickly.
  • Build internal tools to demonstrate performance and operational efficiency
  • Work with teams to resolve issues related to application configuration, deployment, or debugging and proactively strive to improve our products and technologies.
  • Provide documentation and training of duties to new staff and related groups.
  • Provide system administration, configuration, and troubleshooting of the Linux environment.
  • Author, maintain, and deploy scripts in a customer’s cloud and on-premise infrastructures.
  • Bring a can-do attitude and keep evolving by learning new techniques, workarounds, and tools every day
  • Help build a team and cultivate innovation.

Requirements

  • Bachelor’s, Master’s, or Doctoral degree in Computer Science, Electrical Engineering, or a related engineering field preferred.
  • Ability to communicate the logic of design changes in a way that product stakeholders can understand and relate to.
  • Ability to iterate on designs and solutions efficiently and intelligently.
  • 7+ years of experience with design and operation of robust distributed data ecosystems.
  • 3+ years of experience with monitoring, provisioning, scheduling, and alerting on cloud platforms such as IaaS, PaaS, DaaS, SaaS, etc.
  • 3+ years of experience provisioning across major commercial cloud providers (AWS, Azure, Google Cloud, etc.).
  • 3+ years of experience using and developing with technologies such as MongoDB, InfluxDB, Redis, TimescaleDB, Apache Cassandra, Couchbase/CouchDB, Spark (Scala/Python), relational databases, PostgreSQL, Redshift, Azure Cosmos DB or Amazon DynamoDB, Elasticsearch, Docker, AWS EMR, Azure Databricks, HDInsight, etc.
  • 2+ years of strong experience with CI/CD and automation tools such as Jenkins, Bamboo, GitLab, the Git command line, Puppet, Chef, Ansible, etc.
  • 4+ years of strong experience with shell scripting (Bash/Ksh/Csh), Scala, Python, JScript, Ruby, Go, etc.
  • 2+ years of strong experience building and deploying containers using Docker (Compose/Services/Stack/secure), CoreOS rkt, LXC Linux Containers, etc.
  • Working experience with container orchestration tools such as Kubernetes (K8s), Docker Swarm, AWS ECS, Azure Container Service, Google Container Engine, etc. is an added advantage.
  • Solid understanding and usage of monitoring tools (Prometheus, Dynatrace, Splunk, etc.).
  • Experience with or knowledge of cloud IoT platforms such as Azure IoT Hub, AWS IoT Services/Solutions, Google Cloud IoT, etc. is an added advantage.
  • Knowledge of performance benchmarking and diagnostic tools.

Preferred

  • General knowledge of big data, analytics, and cloud technologies.
  • Understanding of machine learning concepts, examples, and benefits.
  • Ability to juggle multiple priorities and deliver effectively in a fast-paced, dynamic environment.
  • Knowledge of single-board computers (SBCs) such as Nvidia Jetson Nano, Raspberry Pi, pcDuino, BeagleBone (Black/Blue), Arduino, etc.
  • Knowledge of Apache Flink and its Scala/Java APIs.
  • Singapore Citizen/Permanent Resident.
  • For the right candidate (>8 years of experience), an Employment Pass will be sponsored.
