Impact: HIGH (N HTTP requests reduced to 1, prevents timeouts and connection exhaustion)
Every Supabase .insert(), .update(), or .upsert() call is an HTTP request to the PostgREST API. When you perform these operations inside a loop — inserting rows one by one, or updating records individually — you create N HTTP round trips where a single batch call would suffice. This wastes network bandwidth, adds cumulative latency (each round trip adds 50-200ms), and can trigger rate limits or timeouts on serverless platforms with 10-second execution limits.
Supabase's .insert() and .upsert() natively accept arrays of objects, batching the entire operation into a single HTTP request and a single SQL transaction.
Incorrect (inserting rows one at a time in a loop):
```typescript
// ❌ One HTTP request per row — O(n) network round trips
export async function importProducts(
  supabase: SupabaseClient,
  csvRows: ProductRow[]
) {
  const results: ImportResult[] = []

  for (const row of csvRows) {
    // ❌ Each iteration fires a separate HTTP request
    const { data, error } = await supabase
      .from('products')
      .insert({
        name: row.name,
        sku: row.sku,
        price: parseFloat(row.price),
        category_id: row.category_id,
        description: row.description,
        stock_quantity: parseInt(row.stock),
      })
      .select('id')
      .single()

    results.push({
      sku: row.sku,
      success: !error,
      id: data?.id,
      error: error?.message,
    })
  }

  return results
}
// 500 products = 500 HTTP requests = ~50 seconds at 100ms per request
// Serverless timeout hit at ~100 rows
```
Incorrect (updating records one at a time):
```typescript
// ❌ Updating order statuses one by one
export async function markOrdersAsShipped(
  supabase: SupabaseClient,
  orderIds: string[],
  trackingNumbers: Map<string, string>
) {
  for (const orderId of orderIds) {
    // ❌ Each update is a separate HTTP request
    await supabase
      .from('orders')
      .update({
        status: 'shipped',
        shipped_at: new Date().toISOString(),
        tracking_number: trackingNumbers.get(orderId),
      })
      .eq('id', orderId)
  }
}
// 200 orders = 200 HTTP requests
```
Incorrect (Promise.all doesn't fix the underlying problem):
```typescript
// ❌ Promise.all reduces wall time but still creates N connections
export async function createNotifications(
  supabase: SupabaseClient,
  userIds: string[],
  message: string
) {
  // ❌ Still N HTTP requests — just concurrent instead of sequential
  await Promise.all(
    userIds.map((userId) =>
      supabase.from('notifications').insert({
        user_id: userId,
        message,
        read: false,
        created_at: new Date().toISOString(),
      })
    )
  )
}
// 1000 users = 1000 concurrent HTTP requests = potential rate limiting
```
Correct (batch insert with array):
```typescript
// ✅ Single HTTP request for all rows
export async function importProducts(
  supabase: SupabaseClient,
  csvRows: ProductRow[]
) {
  // ✅ Transform all rows first, then insert in one batch
  const products = csvRows.map((row) => ({
    name: row.name,
    sku: row.sku,
    price: parseFloat(row.price),
    category_id: row.category_id,
    description: row.description,
    stock_quantity: parseInt(row.stock),
  }))

  const { data, error } = await supabase
    .from('products')
    .insert(products) // ✅ Array of objects — single HTTP request
    .select('id, sku')

  if (error) throw error
  return data
}
// 500 products = 1 HTTP request = ~200ms total
```
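Correct (batch update via upsert): `.update()` has no array form, but the per-order update loop above can also be collapsed into one request with `.upsert()`, which accepts an array and matches on the primary key. This is a minimal sketch, assuming `orders.id` is the primary key; the `buildShippedRows` helper name and the `SupabaseLike` stand-in interface are illustrative, not part of the Supabase API — in a real app you would pass a `SupabaseClient` from `@supabase/supabase-js`.

```typescript
// Minimal structural stand-in for the Supabase client, so the sketch is
// self-contained. In a real app, use SupabaseClient from '@supabase/supabase-js'.
interface QueryResult {
  error: { message: string } | null
}
interface SupabaseLike {
  from(table: string): { upsert(rows: object[]): Promise<QueryResult> }
}

// Pure helper (hypothetical name): build every row object up front
export function buildShippedRows(
  orderIds: string[],
  trackingNumbers: Map<string, string>,
  shippedAt: string
) {
  return orderIds.map((id) => ({
    id,
    status: 'shipped',
    shipped_at: shippedAt,
    tracking_number: trackingNumbers.get(id),
  }))
}

// ✅ One HTTP request for all orders instead of one per order
export async function markOrdersAsShipped(
  supabase: SupabaseLike,
  orderIds: string[],
  trackingNumbers: Map<string, string>
) {
  const rows = buildShippedRows(
    orderIds,
    trackingNumbers,
    new Date().toISOString()
  )
  // upsert matches existing rows on the primary key and updates only the
  // columns present in the payload
  const { error } = await supabase.from('orders').upsert(rows)
  if (error) throw new Error(error.message)
}
```

Two caveats with this approach: an `id` that does not already exist is inserted as a new row rather than rejected, so validate IDs first if that matters; and when every row gets identical values (no per-row tracking number), a single `.update({...}).in('id', orderIds)` is simpler. For very large batches, consider chunking the array (e.g. a few thousand rows per request) to stay under request payload limits.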