TradeStation

Summer Intern 2026 - Data Engineering (AI)

Employee Type: Full-Time

Location: Plantation, FL (Hybrid)

Job Type: Business Services

Job ID: 3277

#WeAreTradeStation

Who We Are:

TradeStation is the home of those born to trade. As an online brokerage firm and trading ecosystem, we are focused on delivering the ultimate trading experience for active traders and institutions. We continuously push the boundaries of what's possible, encourage out-of-the-box thinking, and relentlessly search for like-minded innovators.

At TradeStation, we are building an AI-First culture. We expect team members to embrace AI as a core part of their daily workflow, whether that's using AI to accelerate development, enhance decision-making, improve client outcomes, or streamline internal processes. We hire, grow, and promote people who can harness AI responsibly and creatively. We treat AI as a partner in problem-solving, not just a tool, and we follow our governance standards to ensure AI is used ethically, securely, and transparently. If you join us, you're joining a culture where AI is how we work.

What We Are Looking For:

The Data Engineering Intern will contribute to the development, testing, and automation of data pipelines and workflows within our data engineering and AI ecosystems. This role offers the opportunity to work with cutting‑edge technologies, including machine learning, generative AI, and AI agent frameworks, while developing scalable and automated solutions. The intern will collaborate closely with the team to ensure high‑quality deliverables, experiment with AI‑driven acceleration tools, prototype LLM‑powered agents for workflow automation, and apply best practices in modern data engineering and AI integration.

What You'll Be Doing:

  • Experiment with generative AI to optimize ETL logic, code templates, and metadata enrichment.
  • Build and evaluate AI agents for automated data quality checks, documentation generation, and pipeline monitoring.
  • Develop and automate testing suites for data engineering workflows and AI-driven applications to ensure reliability and performance.
  • Support the development of multi‑agent workflows where multiple AI agents collaborate on data validation, profiling, and anomaly detection.
  • Prototype autonomous or semi‑autonomous AI agents to assist with routine engineering tasks (e.g., monitoring pipelines, generating documentation, performing schema checks, triaging pipeline failures).
  • Contribute to the development and deployment of AI models and algorithms within data pipelines.
  • Collaborate with data engineers and AI specialists to identify opportunities for leveraging machine learning and AI tools.
  • Troubleshoot and resolve issues in data workflows, AI models, and automation processes.

The Skills You Bring:

  • Familiarity with large language models (LLMs) or modern AI APIs (OpenAI, Azure OpenAI, Anthropic, etc.).
  • Understanding of RAG pipelines, prompt engineering, or fine‑tuning.
  • Strong analytical and problem-solving skills with a passion for data, technology, and AI.
  • Familiarity with one or more programming languages (e.g., Python, SQL, or similar) and AI frameworks or libraries (e.g., TensorFlow, PyTorch, or scikit-learn).
  • Basic understanding of data engineering concepts such as ETL processes, data pipelines, or big data tools.
  • Exposure to AI concepts, including machine learning, natural language processing, or computer vision.
  • Ability to work collaboratively in a team environment and take initiative on tasks.
  • Interest in learning about AI integration, automation, and best practices in data engineering.

Minimum Qualifications:

  • Currently pursuing a Bachelor's or Master's degree in Computer Science, Data Engineering, Data Science, Artificial Intelligence, or a related field.
  • Experience with programming languages such as Python, SQL, or similar.
  • Basic knowledge of data engineering concepts, including ETL processes, data pipelines, and databases.
  • Familiarity with AI or machine learning concepts and libraries (e.g., TensorFlow, PyTorch, scikit-learn).

Desired Qualifications:

  • Experience with big data tools and platforms, such as Apache Spark, Hadoop, or Databricks.
  • Familiarity with cloud platforms like AWS or Azure.
  • Hands-on experience with version control systems (e.g., Git).
  • Exposure to MLOps practices, including deploying and monitoring machine learning models.
  • Experience with data visualization tools (e.g., Tableau, Power BI) or Python libraries (e.g., Matplotlib, Seaborn).
  • Understanding of data governance, security, and compliance practices.

Benefits at TradeStation

  • Collaborative work environment
  • Competitive Salaries
  • Yearly bonus
  • Comprehensive benefits for you and your family starting Day 1
  • Unlimited Paid Time Off
  • Flexible working environment
  • TradeStation Account employee benefits, as well as full access to trading education materials

TradeStation provides equal employment opportunities to current and prospective employees, without regard to race, color, religion, sex, national origin, ancestry, sexual orientation, age, pregnancy, disability, handicap, citizenship, veteran or marital status, or any other legally recognized status entitled to protection under federal, state, or local anti-discrimination laws.