Incidents | Sieve
Incidents reported on the status page for Sieve: https://status.sievedata.com/

API is down. Google Cloud has an outage.
https://status.sievedata.com/incident/601644
  Thu, 12 Jun 2025 21:00:00 -0000: Google and Cloudflare have resolved their issues, and we have restarted our services and are back online. Apologies for any inconvenience caused.
  Thu, 12 Jun 2025 18:12:00 -0000: Update: both Cloudflare and Google are down. We are in touch with their teams and will post updates as soon as we can.
  Thu, 12 Jun 2025 17:53:00 -0000: Our API is currently down; we are investigating right now.

L4 recovered: Thu, 12 Jun 2025 20:43:39 +0000
CPU recovered: Thu, 12 Jun 2025 20:42:59 +0000
A100 recovered: Thu, 12 Jun 2025 20:42:50 +0000
L4 went down: Thu, 12 Jun 2025 17:41:58 +0000
CPU went down: Thu, 12 Jun 2025 17:41:47 +0000
A100 went down: Thu, 12 Jun 2025 17:41:38 +0000

Sieve Job Processing is Degraded
https://status.sievedata.com/incident/583913
  Fri, 30 May 2025 02:41:00 -0000: Jobs were backed up due to an issue with our cloud provider. The immediate problem is resolved and we are now working through the backlog of jobs. The issue has since been resolved.

Several Sieve APIs stuck in processing
https://status.sievedata.com/incident/557079
  Tue, 06 May 2025 01:14:00 -0000: The job processing issue has been resolved. All jobs should complete normally.
  Mon, 05 May 2025 23:15:00 -0000: Several Sieve APIs, including Dubbing, Lipsync, and Audio-Enhance, are currently stuck processing due to an issue with our internal file handling. We are working on a fix.

File servers overwhelmed causing dubbing, autocrop slowdowns
https://status.sievedata.com/incident/542148
  Tue, 08 Apr 2025 22:46:00 -0000

L4 recovered: Tue, 08 Apr 2025 19:01:36 +0000
L4 went down: Tue, 08 Apr 2025 18:54:36 +0000

API requests have slow roundtrip time
https://status.sievedata.com/incident/542150
  Tue, 08 Apr 2025 09:30:00 -0000: This issue is resolved. A full RCA can be viewed here: https://docs.google.com/document/d/1h79gU9BnvsVNplL2N7n60Uwq5SLMibGhrfqbMxN66cc/edit?tab=t.0
  Tue, 08 Apr 2025 06:00:00 -0000
L4 recovered: Tue, 08 Apr 2025 01:18:47 +0000
L4 went down: Tue, 08 Apr 2025 01:11:47 +0000

1 in 20 API requests are hanging or failing with 500s
https://status.sievedata.com/incident/542149
  Mon, 07 Apr 2025 22:00:00 -0000: This issue is resolved. Full RCA can be viewed here: https://docs.google.com/document/d/1KLIZIAYAuy5XgmIy8lInfxQcgT32DyIN4O58gfM3Dsg/edit?tab=t.0
  Sun, 06 Apr 2025 19:00:00 -0000

L4 recovered: Sun, 06 Apr 2025 22:42:13 +0000
L4 went down: Sun, 06 Apr 2025 22:26:12 +0000
L4 recovered: Sun, 06 Apr 2025 22:14:16 +0000
L4 went down: Sun, 06 Apr 2025 21:53:13 +0000
L4 recovered: Sun, 06 Apr 2025 21:46:11 +0000
L4 went down: Sun, 06 Apr 2025 21:28:14 +0000
L4 recovered: Sun, 06 Apr 2025 21:15:16 +0000
L4 went down: Sun, 06 Apr 2025 19:28:17 +0000

File servers overwhelmed causing dubbing, autocrop slowdowns
https://status.sievedata.com/incident/542148
  Sat, 05 Apr 2025 05:00:00 -0000: We've resolved the issue. Full RCA can be viewed here: https://docs.google.com/document/d/18Ch92tWJbYcSqJa88bKlvPaUCGN-KrckiHtf7TgKn_U/edit?tab=t.0

API recovered: Sat, 05 Apr 2025 01:38:00 +0000
API went down: Sat, 05 Apr 2025 01:32:55 +0000
L4 recovered: Tue, 01 Apr 2025 06:20:22 +0000
L4 went down: Tue, 01 Apr 2025 06:14:25 +0000
L4 recovered: Sun, 30 Mar 2025 20:24:28 +0000
L4 went down: Sun, 30 Mar 2025 20:17:28 +0000
L4 recovered: Sat, 29 Mar 2025 20:03:29 +0000
L4 went down: Sat, 29 Mar 2025 19:55:30 +0000
A100 recovered: Fri, 28 Mar 2025 21:30:26 +0000
A100 went down: Fri, 28 Mar 2025 21:22:25 +0000
L4 recovered: Thu, 27 Mar 2025 16:08:23 +0000
L4 went down: Thu, 27 Mar 2025 16:02:22 +0000
L4 recovered: Thu, 27 Mar 2025 14:41:21 +0000
L4 went down: Thu, 27 Mar 2025 14:05:36 +0000
L4 recovered: Wed, 26 Mar 2025 15:49:22 +0000
L4 went down: Wed, 26 Mar 2025 15:43:25 +0000
L4 recovered: Wed, 26 Mar 2025 15:30:25 +0000
L4 went down: Wed, 26 Mar 2025 15:21:33 +0000
sieve/youtube-downloader suffering long queue times, several other functions are slow to process.
https://status.sievedata.com/incident/528253
  Thu, 13 Mar 2025 14:10:00 -0000: We've root caused and fixed the issue; the full RCA is below.

  Root Cause Analysis (RCA), March 13, 2025

  Incident Summary: On March 13, 2025, between 2:30 AM and 5:30 AM PST, a large push of jobs by our internal team to the sieve/youtube-downloader function caused a queue buildup, stalling customer requests to this function. This overwhelmed the servers responsible for video file CRUD operations, affecting multiple functions that generate and output video files. The issue was mitigated by 7:00 AM PST after manual intervention.

  Timeline:
  - 2:30 AM PST: Internal team initiated a large push of jobs to sieve/youtube-downloader.
  - 3:00 AM PST: Queue buildup began, impacting file handling servers.
  - 4:00 AM PST: Degradation in video file processing observed.
  - 5:30 AM PST: Alerts triggered; issue identified as an internal push causing excessive load.
  - 6:00 AM PST: Large internal push was removed from the queue to allow customer jobs to process.
  - 7:00 AM PST: Additional manual scaling applied to video file storage services; system returned to normal operation.

  Root Cause: A sudden and unexpected influx of jobs from an internal team overloaded the queue, leading to excessive demand on video file CRUD operations. The scaling mechanism for video file storage servers was not aggressive enough to handle the spike in demand. Alerts were delayed because the load originated from an internal push rather than customer requests.

  Resolution & Mitigation:
  - Queue separation: Segregating internal and customer job queues so internal pushes cannot affect customer processing. This has already been implemented (see the sketch below).
  - Improved scaling: Implementing more aggressive auto-scaling policies for video file storage servers to handle sudden spikes. Will be implemented by March 25th.
  - Enhanced monitoring and alerting: Refining alerting mechanisms to detect large internal job influxes earlier, and setting up specific monitoring for queue buildup to ensure proactive mitigation. This is done.
  - Internal process changes: Implementing guidelines for internal teams to coordinate with infrastructure teams before large job pushes. This is done.

  Conclusion: This incident was caused by an internal job push overwhelming video file handling services. While the issue was resolved through manual intervention, long-term mitigations including queue separation, improved scaling, and better monitoring are being implemented to prevent recurrence.

  Thu, 13 Mar 2025 12:34:00 -0000: We're looking into why this is happening and will keep you posted. We believe this started at around 2:30-3:00 am PDT.
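The queue-separation mitigation described in the RCA above is a common pattern. As a rough illustration only, and not Sieve's actual implementation, the Python sketch below keeps internal and customer jobs on separate queues with dedicated worker pools, so a large internal push cannot starve customer traffic. All names and numbers in it are hypothetical.

```python
"""Illustrative sketch of queue separation; not Sieve's real job system."""
import queue
import threading
import time

# Separate queues so an internal bulk push cannot sit in front of customer jobs.
customer_queue: queue.Queue = queue.Queue()
internal_queue: queue.Queue = queue.Queue()

# Dedicated worker pools per queue; an internal backlog only ties up internal workers.
CUSTOMER_WORKERS = 4
INTERNAL_WORKERS = 2


def process(job: str) -> None:
    # Stand-in for real work (e.g. a youtube-downloader run).
    time.sleep(0.01)
    print(f"done: {job}")


def worker(q: queue.Queue) -> None:
    while True:
        job = q.get()
        if job is None:  # shutdown sentinel
            q.task_done()
            return
        try:
            process(job)
        finally:
            q.task_done()


def start_pool(q: queue.Queue, n: int) -> list:
    threads = [threading.Thread(target=worker, args=(q,), daemon=True) for _ in range(n)]
    for t in threads:
        t.start()
    return threads


if __name__ == "__main__":
    start_pool(customer_queue, CUSTOMER_WORKERS)
    start_pool(internal_queue, INTERNAL_WORKERS)

    # A large internal push lands on its own queue...
    for i in range(1000):
        internal_queue.put(f"internal-{i}")
    # ...while customer jobs are picked up immediately by their own pool.
    for i in range(10):
        customer_queue.put(f"customer-{i}")

    # Customer jobs finish without waiting behind the internal backlog.
    customer_queue.join()
```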
Jobs Unable to Process
https://status.sievedata.com/incident/509152
  Fri, 07 Feb 2025 12:19:00 -0000: Root Cause Analysis (RCA)

  Incident Summary: On [date/time of the incident], our services experienced a disruption due to a database service undergoing automatic maintenance. This maintenance reset the DNS configuration, causing our systems to lose connection to the database. The issue persisted because the affected services did not reestablish the connection properly. Additionally, certain users experienced degraded performance for approximately 3 hours following the incident. These users had concurrency limits in place and continued pushing jobs at their normal throughput levels. Due to the disruption, a backlog of jobs built up, and our systems did not automatically increase their concurrency limits to help process the queue more efficiently.

  Resolution: Our team quickly identified the root cause of the database connection issue, restored service functionality, and worked to process the backlog of jobs.

  Preventative Actions: To prevent similar incidents in the future:
  - We are transitioning automated maintenance windows for critical database services to manual scheduling to minimize unexpected changes.
  - We are enhancing our internal service mechanisms to better handle connection reestablishment in the event of similar disruptions.
  - We are implementing improvements to automatically adjust concurrency limits during recovery periods to ensure backlogs are cleared more quickly.

  We sincerely apologize for the inconvenience caused and appreciate your understanding as we take these steps to ensure greater reliability and performance. If you have further questions or concerns, please don't hesitate to reach out to our support team. Thank you for your continued trust.

  Fri, 07 Feb 2025 11:15:00 -0000: Our job processing and monitoring infrastructure is down; we are working on resolving it urgently.
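The RCA above attributes the outage to services that never reestablished their database connections after the DNS reset. A generic, hedged sketch of that "drop and rebuild the connection" behavior follows; the driver, exception type, and helper names are placeholders and say nothing about Sieve's actual database stack.

```python
"""Illustrative reconnect-on-failure pattern; placeholder driver, not Sieve's stack."""
import time
from typing import Any, Callable


class ConnectionLost(Exception):
    """Placeholder for whatever error a real driver raises when addresses go stale."""


def run_with_reconnect(
    connect: Callable[[], Any],        # opens a fresh connection (re-resolving DNS)
    operation: Callable[[Any], Any],   # the query/work to run against the connection
    max_attempts: int = 5,
    base_delay: float = 0.5,
) -> Any:
    conn = connect()
    for attempt in range(1, max_attempts + 1):
        try:
            return operation(conn)
        except ConnectionLost:
            if attempt == max_attempts:
                raise
            # Back off, then rebuild the connection from scratch instead of
            # reusing a handle that may still point at a stale address.
            time.sleep(base_delay * 2 ** (attempt - 1))
            try:
                conn.close()
            except Exception:
                pass
            conn = connect()


if __name__ == "__main__":
    # Tiny stand-in driver so the sketch runs end to end.
    class FakeConn:
        def __init__(self, healthy: bool):
            self.healthy = healthy

        def query(self) -> str:
            if not self.healthy:
                raise ConnectionLost("stale DNS entry")
            return "ok"

        def close(self) -> None:
            pass

    attempts = {"n": 0}

    def connect() -> FakeConn:
        attempts["n"] += 1
        # The first connection points at the old address; reconnects succeed.
        return FakeConn(healthy=attempts["n"] > 1)

    print(run_with_reconnect(connect, lambda c: c.query()))  # "ok" after one reconnect
```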
API is unavailable
https://status.sievedata.com/incident/473171
  Fri, 06 Dec 2024 00:09:00 -0000: The issue is resolved. It was due to an error caused by our DB provider when performing a standard DB operation. We are contacting them to ensure this never happens again, and apologize for any inconvenience caused.
  Thu, 05 Dec 2024 23:55:00 -0000: Our API was down due to a failure with our DB provider. We are working urgently to fix this and will provide an update shortly.

A few jobs stuck on queued and processing.
https://status.sievedata.com/incident/466079
  Fri, 22 Nov 2024 18:33:00 -0000: This has been resolved; we will post a more detailed RCA soon.
  Fri, 22 Nov 2024 18:30:00 -0000: We've noticed <30 jobs total that were stuck on either queued or processing. This was due to an internal service maintenance that lasted a few seconds. Users should not have been billed.

Jobs stuck on queued
https://status.sievedata.com/incident/459279
  Mon, 11 Nov 2024 03:30:00 -0000: The issue was resolved.
  Mon, 11 Nov 2024 02:30:00 -0000: We are actively investigating an issue where jobs were not processing between 6:30 and 7:30 PM PDT on Sunday, Nov 10th. This issue has been resolved and jobs appear to be running.

Jobs involving sieve.File objects are hanging and occasionally returning no outputs
https://status.sievedata.com/incident/447715
  Sun, 20 Oct 2024 18:00:00 -0000: We identified the root cause of the issue; all jobs submitted after 11 am will work. The issue was due to a massive burst of jobs which backlogged a service that uploads files during job completion. We've resolved this by allocating more resources to the affected services, allowing them to scale much higher than before. We will reimburse all affected jobs, including some old jobs which are continuing to hang.
  Sun, 20 Oct 2024 17:40:00 -0000: Due to high load this morning, sieve.File objects are hanging in the backend. We will update with a resolution as soon as possible, and will reimburse all affected jobs.
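The Oct 20 fix above amounts to letting the file-upload service scale further when the backlog spikes. As a toy illustration only, with entirely hypothetical names and numbers, a backlog-proportional scaling decision with a raised ceiling might look like this:

```python
"""Toy backlog-based scaling rule; hypothetical numbers, not Sieve's autoscaler."""
import math

MIN_WORKERS = 2
MAX_WORKERS = 200          # raised ceiling so bursts can be absorbed
JOBS_PER_WORKER = 25       # target backlog each upload worker should own


def desired_workers(queued_uploads: int) -> int:
    """Pick a worker count proportional to backlog, clamped to [MIN, MAX]."""
    wanted = math.ceil(queued_uploads / JOBS_PER_WORKER)
    return max(MIN_WORKERS, min(MAX_WORKERS, wanted))


if __name__ == "__main__":
    for backlog in (0, 40, 1_000, 10_000):
        print(backlog, "queued ->", desired_workers(backlog), "workers")
```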
Dashboard shows "No outputs found" for certain outputs
https://status.sievedata.com/incident/433749
  Mon, 23 Sep 2024 23:22:00 -0000: This is resolved for all new jobs. Older broken jobs from the past few hours will unfortunately still be broken. We will reimburse all broken jobs during the next billing period.
  Mon, 23 Sep 2024 23:04:00 -0000: A recent push today caused certain jobs to show "No Outputs Found" in the dashboard upon completion. We are working on the fix now and it will be out in a few minutes.

Jobs stuck on processing
https://status.sievedata.com/incident/417816
  Fri, 23 Aug 2024 06:32:00 -0000: We have pushed a fix. This issue is resolved.
  Wed, 21 Aug 2024 23:31:00 -0000: We noticed an issue where certain jobs were preempted but not resumed. We are pushing a fix for the issue. Users will not be charged for these jobs.

Jobs with child jobs may sporadically hang
https://status.sievedata.com/incident/372973
  Wed, 22 May 2024 04:50:00 -0000: We have identified and pushed a fix at 9:58pm PST, so future jobs will not be affected, in addition to remedying many of the existing jobs. Customers will not be billed for any extra compute used during this time period.
  Tue, 21 May 2024 19:43:00 -0000: For jobs that call several child jobs, we noticed an issue starting at 12:43pm PST caused by preemption-based retries in child jobs not being properly caught by the parent.
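For context on the parent/child failure mode described in this incident, here is a rough, hypothetical sketch (not Sieve's SDK or internals) of a parent loop that tolerates a child job being preempted and retried, i.e. dropping back from "processing" to "queued", instead of treating any non-terminal transition as a hang or failure. The get_job_status helper and the status names are assumptions for illustration.

```python
"""Illustrative parent/child polling loop that tolerates preemption retries."""
import time
from typing import Callable, Dict, List

TERMINAL = {"finished", "error"}


def wait_for_children(
    child_ids: List[str],
    get_job_status: Callable[[str], str],   # hypothetical status lookup
    poll_interval: float = 1.0,
    timeout: float = 3600.0,
) -> Dict[str, str]:
    """Block until every child reaches a terminal state, tolerating retries."""
    deadline = time.monotonic() + timeout
    statuses: Dict[str, str] = {cid: "queued" for cid in child_ids}
    while time.monotonic() < deadline:
        for cid in child_ids:
            if statuses[cid] in TERMINAL:
                continue
            # A preempted child may go processing -> queued -> processing again.
            # That is expected; only terminal states end the wait for that child.
            statuses[cid] = get_job_status(cid)
        if all(s in TERMINAL for s in statuses.values()):
            return statuses
        time.sleep(poll_interval)
    pending = [c for c, s in statuses.items() if s not in TERMINAL]
    raise TimeoutError(f"children still pending: {pending}")


if __name__ == "__main__":
    # Scripted statuses that simulate one child being preempted and retried.
    script = {
        "child-a": iter(["processing", "queued", "processing", "finished"]),
        "child-b": iter(["processing", "finished"]),
    }

    def fake_status(cid: str) -> str:
        return next(script[cid], "finished")

    print(wait_for_children(["child-a", "child-b"], fake_status, poll_interval=0.01))
```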