
Some users experiencing slowness, occasional error messages
Incident Report for Instructure
Resolved
Amazon Web Services and our DevOps team have restored performance to normal levels for all Canvas users. We apologize for the extended period of slowness you and your users encountered this morning.

We’ll work with AWS to prepare an incident report for Canvas admins at affected institutions. Because we need information from AWS about the root cause of the problem to present an appropriately detailed report, we are committing to provide the report by the end of the day on Wednesday rather than tomorrow. We’ll certainly provide it sooner if possible.
Posted Nov 30, 2015 - 11:42 MST
Monitoring
Amazon Web Services has taken measures that are beginning to have a meaningful positive effect on performance for Canvas users. We’ll provide another update in 30 minutes; in the meantime, you should see steady improvement in load times.
Posted Nov 30, 2015 - 11:12 MST
Update
Our partners at Amazon Web Services have identified the cause of the AWS API issue behind the Canvas slowness this morning. Two separate teams at AWS are working with our DevOps team on a short-term solution. We’ll provide another update in 30 minutes.
Posted Nov 30, 2015 - 10:25 MST
Update
Our DevOps team continues to work with Amazon Web Services on the root cause of the slowness users are experiencing in Canvas this morning. We will provide another update in 30 minutes.
Posted Nov 30, 2015 - 10:10 MST
Update
We apologize for the continuing slowness some users are experiencing while trying to use Canvas this morning. We continue to work with our partners at Amazon Web Services. We’ll provide another update in 30 minutes.
Posted Nov 30, 2015 - 09:33 MST
Update
Our DevOps team reports that they are beginning to see fewer API errors from Amazon Web Services when they attempt to scale up Canvas resources. This hasn’t translated into meaningfully better performance for users yet, but it’s a good sign. AWS continues to work on the underlying problem. We’ll provide another update in 30 minutes or sooner.
Posted Nov 30, 2015 - 09:00 MST
Update
Our DevOps team is still working with Amazon Web Services. AWS is implementing some configuration changes that should improve performance. We’ll provide another update in 30 minutes or sooner.
Posted Nov 30, 2015 - 08:24 MST
Update
We continue to work with our partners at Amazon Web Services on a solution to their API issue. We rely on the AWS API to add more resources to our infrastructure as user demand increases each morning.

The impact of this issue has spread beyond the relatively small group of users initially affected; about half of Canvas users are currently seeing some degree of slowness and occasional error messages. We’ll provide another update in about 15 minutes. We sincerely apologize for the trouble!
Posted Nov 30, 2015 - 07:54 MST
Update
We’re still working with Amazon Web Services to correct the issue causing slowness for some Canvas users. We’ll provide another update in 15 minutes and hope to have good news to share at that time.
Posted Nov 30, 2015 - 07:26 MST
Update
Our DevOps team continues to work with Amazon Web Services on an issue with their APIs that is causing Canvas slowness for some users. We apologize for the issue this morning! This problem is affecting fewer than 8% of all Canvas users, and we’ll get things back up to speed for all users ASAP.
Posted Nov 30, 2015 - 07:09 MST
Identified
Some of your users are experiencing slowness and occasional error messages in Canvas this morning. The root of the issue is with Amazon Web Services’ APIs, which we rely upon to scale Canvas capacity to match user needs. Our DevOps team is working with AWS to find a solution while AWS addresses the problem on their side. We’ll provide an update in 15 minutes.
Posted Nov 30, 2015 - 06:52 MST
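
For context on the scaling dependency described in the updates above: the short sketch below is purely illustrative and assumes Python with the boto3 SDK. The Auto Scaling group name "canvas-web-asg", the region, and the increment are hypothetical placeholders, not Instructure’s actual configuration. It shows the kind of AWS Auto Scaling API call that adds servers as morning demand grows, and which cannot add capacity while the API is returning errors.

    # Illustrative sketch only (not Instructure's actual tooling): raising the
    # desired capacity of an AWS Auto Scaling group as morning demand grows.
    import boto3

    # Region and group name are hypothetical placeholders.
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    def scale_up(group_name: str, extra_instances: int) -> None:
        """Add extra_instances to the group's current desired capacity."""
        groups = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[group_name]
        )["AutoScalingGroups"]
        if not groups:
            raise ValueError(f"Auto Scaling group {group_name!r} not found")
        current = groups[0]["DesiredCapacity"]
        # If this call fails with API errors, no new servers are launched and the
        # existing fleet must absorb the morning traffic on its own.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=group_name,
            DesiredCapacity=current + extra_instances,
            HonorCooldown=False,
        )

    scale_up("canvas-web-asg", 4)  # hypothetical group name and increment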