Best practices for Snowflake cost and access control

As organizations scale their cloud data warehouse usage, administrators inevitably face two critical challenges: optimizing compute costs and implementing fine-grained access controls. Any organization looking to migrate to Snowflake, or to scale on it successfully, needs a strategy that balances federated governance, performance for technical teams and fiscal responsibility for the business.

This article shares best practices from the federated model and custom tooling Capital One developed to manage data in Snowflake cost-effectively, while instituting resource-level role-based access controls (RBAC).

The dual challenge of Snowflake administration

Imagine you are an administrator who recently completed a proof of concept on a new Snowflake account and your organization is slowly preparing to migrate users into Snowflake. As an admin, you will be faced with two main challenges: first, creating warehouses, and second, managing access to these warehouses.

Warehouse creation requires input from the data teams, who know their workloads and what warehouse configuration best meets their SLAs and cost goals. The business, on the other hand, pays for the warehouse and wants control over the total cost to ensure it does not spiral out of control. Lastly, when a team creates a warehouse, the platform team worries about performance. So, how do we get them all on the same page?

Cost-effective warehouse provisioning with a federated model

To enable optimal performance, Capital One adopted a federated model for warehouse provisioning, anchored by Capital One Slingshot.

The federated approval workflow

Instead of leaving the decision to centralized IT, the tech team responsible for the workload defines the initial warehouse configuration, including size, maximum cluster count and a performance schedule.

  1. Defining the schedule: The team uses Slingshot to define a dynamic warehouse schedule with a larger size for peak hours and a smaller, step-down configuration for non-peak hours.

    • The team has created warehouse templates in Slingshot that capture size and schedule settings. These templates can then be reused by other teams to speed up the approval process for provisioning new warehouses.

  2. Cost transparency: Slingshot immediately calculates and displays the full-capacity cost. 

    • For example, a Medium warehouse consumes 4 credits per hour, so a Medium warehouse running three clusters consumes 12 credits per hour at full capacity. Slingshot calculates the total monthly cost accordingly:

      12 credits/hour x 24 hours x 30 days = 8,640 credits 

      By multiplying the total credit count by the Snowflake rate, Slingshot provides an estimated dollar value of the cost savings and performance impact of applying the dynamic schedule.

  3. Tiered review: The proposed schedule then proceeds through a tiered approval flow:

    • The Snowflake Platform Team reviews the proposed size and schedule for technical best practices, recommending changes if necessary.

    • The Business Owner provides final approval, evaluating the associated cost and ensuring alignment with the budget.

    • Upon approval, the warehouse is provisioned and all relevant stakeholders are notified.
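The full-capacity cost math from the transparency step can be sketched in a few lines. This is an illustrative calculation only; the function names, the per-credit dollar rate and the assumption of a 30-day month are not Slingshot's actual implementation, and the credits-per-hour table reflects Snowflake's published standard rates per warehouse size.

```python
# Illustrative sketch of full-capacity warehouse cost math.
# Names and the $/credit rate are assumptions, not Slingshot's API.

# Snowflake credits consumed per hour, by warehouse size (standard rates).
CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}

def full_capacity_credits(size: str, clusters: int,
                          hours: int = 24, days: int = 30) -> int:
    """Credits consumed if every cluster runs at full capacity all month."""
    return CREDITS_PER_HOUR[size] * clusters * hours * days

def estimated_cost(size: str, clusters: int, dollars_per_credit: float) -> float:
    """Dollar estimate: total credits multiplied by the contracted rate."""
    return full_capacity_credits(size, clusters) * dollars_per_credit

# A Medium warehouse with three clusters: 4 credits/hr x 3 = 12 credits/hr.
print(full_capacity_credits("M", 3))       # 8640 credits per month
print(estimated_cost("M", 3, 3.00))        # 25920.0 at an assumed $3/credit
```

Comparing this full-capacity figure against the cost of a stepped-down schedule (e.g., a Small warehouse during non-peak hours) is what surfaces the potential savings of a dynamic schedule.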

Continuous warehouse optimization with Slingshot

Once the initial schedule is approved, Slingshot adopts it as a baseline and continuously tracks actual usage patterns. 

  • Cost-saving recommendations: As warehouse workloads change, Slingshot generates new, cost-saving recommendations based on actual usage, enabling cost-effective management.

  • Performance-related changes: At Capital One, changes to the Slingshot schedule that result in credit savings do not require an approval process. However, any performance-related recommendations that may increase costs must follow the existing workflow for business team approval.

  • Automated implementation: Slingshot can be set to automatically implement recommendations as they become available, streamlining the optimization process. Alternatively, teams have the flexibility to review and modify the recommendations before accepting them. At Capital One, we automate the process for non-critical environments (Dev/QA).
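The routing policy described above can be sketched as a small decision function. This is a hedged illustration of the policy, not Slingshot's actual decision logic; the function name, the credit-based comparison and the environment labels are all assumptions for the sake of the example.

```python
# Illustrative sketch of the recommendation-routing policy described above:
# pure credit savings auto-apply; potential cost increases in critical
# environments route to business approval. Not Slingshot's actual logic.

def route_recommendation(current_credits: float, proposed_credits: float,
                         env: str) -> str:
    """Decide whether a schedule recommendation needs approval."""
    if proposed_credits < current_credits:
        return "auto-apply"          # credit savings: no approval required
    if env in ("dev", "qa"):
        return "auto-apply"          # non-critical environments are automated
    return "business-approval"       # potential cost increase in production

print(route_recommendation(8640, 6200, "prod"))  # auto-apply
print(route_recommendation(8640, 9000, "prod"))  # business-approval
```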

Simplified federated access

A common pitfall is granting the same roles both data and compute privileges. Data-access roles should not carry the same privileges as warehouse-compute roles, for example. If they do, you might inadvertently allow users to utilize warehouses not owned by their team, leading to increased costs and reduced visibility.

Traditionally, organizations store tables in schemas and databases, granting access roles to users for data access. Often, for simplicity, these same data-related roles are also granted to the warehouses. 

To address these concerns and align with Snowflake's principle of separating compute and storage, we implemented distinct access policies for data and warehouse compute.

Schema-based warehouse access roles

To simplify management, we adopted a naming convention built on distinct roles for data access and for compute usage.

  • Dedicated compute roles: To ensure effective access control, we established specific warehouse access roles. These roles were named to reflect the corresponding line of business and its sub-line of business. 

  • Streamlined access: Data access for users is controlled through horizontal data privilege roles. This implementation enforces the principle of least privilege, granting access only to a specific set of pre-approved tables.
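A minimal sketch of this separation is shown below: one helper builds compute roles named after the line of business, another builds schema-scoped data-access roles, and each role receives only its own kind of grant. The naming convention, role names and helper functions here are illustrative assumptions, not Capital One's actual convention; the GRANT statements are standard Snowflake SQL.

```python
# Hedged sketch of separating warehouse-compute roles from data-access roles.
# Role names and the LOB/sub-LOB convention are illustrative only.

def warehouse_role(lob: str, sub_lob: str) -> str:
    """Compute role named for the line of business and sub-line of business."""
    return f"WH_{lob}_{sub_lob}_USAGE".upper()

def data_role(database: str, schema: str) -> str:
    """Data-access role scoped to a schema, with no warehouse privileges."""
    return f"DATA_{database}_{schema}_READ".upper()

def compute_grants(lob: str, sub_lob: str, warehouse: str) -> list[str]:
    """The compute role gets warehouse usage only -- no data privileges."""
    role = warehouse_role(lob, sub_lob)
    return [f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {role};"]

def data_grants(database: str, schema: str) -> list[str]:
    """The data role gets read access to a schema -- no warehouse usage."""
    role = data_role(database, schema)
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {database}.{schema} TO ROLE {role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {database}.{schema} TO ROLE {role};",
    ]

print(compute_grants("card", "fraud", "CARD_FRAUD_WH")[0])
```

Because the two role families never overlap, a user who holds a data role for a schema still cannot run queries on another team's warehouse without an explicitly approved compute role.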

Ongoing access review for least privilege

To enforce the zero-trust principle of least privilege, users needing access to a specific computing warehouse are required to submit a request through our governance system and are subject to ongoing access reviews. 

  1. Request and approval: Users submit a request to the desired team and line-of-business warehouse role, which is then routed to the designated warehouse owner for approval. This access is not permanent, even upon approval.

  2. Automated audit: On an ongoing basis, the system generates a full report of all users with access to a specific warehouse.

  3. Review notification: The report is automatically emailed to the user, their manager, and the warehouse owner, enabling a regular audit and timely removal of unnecessary access. 

This automated process ensures that every team member consistently maintains only the least access necessary to perform their job.
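The recurring audit step can be sketched as a small report builder: group active grants by warehouse, then fan the report out to each user, their manager and the warehouse owner. The data model here (grants as user/warehouse pairs, simple manager and owner lookups) is an assumption for illustration, not the actual governance system.

```python
# Minimal sketch of the periodic access-review report described above.
# The data model is an illustrative assumption, not the real system.

from collections import defaultdict

def build_review_report(grants, managers, owners):
    """Group warehouse access by warehouse and list the report recipients."""
    by_warehouse = defaultdict(list)
    for user, warehouse in grants:
        by_warehouse[warehouse].append(user)

    report = {}
    for warehouse, users in by_warehouse.items():
        report[warehouse] = {
            "users": sorted(users),
            # Each user, their manager and the warehouse owner get the report.
            "recipients": sorted(set(users)
                                 | {managers[u] for u in users}
                                 | {owners[warehouse]}),
        }
    return report

grants = [("alice", "CARD_WH"), ("bob", "CARD_WH")]
managers = {"alice": "carol", "bob": "carol"}
owners = {"CARD_WH": "dave"}
print(build_review_report(grants, managers, owners))
```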

With only approved users from the appropriate team having access to Snowflake warehouses, your enterprise could also benefit from:

  • Cost savings: Restricting warehouse access only to necessary team members is vital for efficient workload management and ensures that the most critical tasks are prioritized.

  • Reduced Snowflake disruptions: Controlling access increases warehouse availability and reduces disruptions.

  • Federated access: Warehouse access is only granted to team members within a specific group of teams, and requires warehouse owner approval if they are not already in that group.

Conclusion

Capital One's experience demonstrates that a centralized data cloud management solution is key to managing a large-scale Snowflake environment. Slingshot is more than an optimization tool. It acts as a governance hub, offering:

  • Detailed showback and chargeback: Slingshot provides granular chargeback and showback at the LOB, team and even user level, allowing warehouse owners to easily identify and monitor high-cost users.

  • Account-wide visibility: For the platform team and account owner, Slingshot offers a comprehensive view of usage across the entire Snowflake account, enabling the recommendation of best practices.

By automating cost estimation, scheduling, optimization and access controls, Slingshot simplifies warehouse creation and ensures that performance and cost efficiency remain in continuous balance across the organization.


Ganesh Bharathan, Director of Data Engineering, Capital One Software

Ganesh Bharathan is a Director of Data Engineering for Capital One Software, the enterprise B2B software business of Capital One. He leads a team of engineers responsible for managing the Snowflake platform and Slingshot innovations. Ganesh has a master's degree in Information Technology from George Mason University and more than 20 years of experience in database architecture and performance optimization. He is a Snowflake subject matter expert, holding several of Snowflake’s SnowPro Advanced certifications, including Architect, Administrator and Data Engineer.
