
Monday
Details
13 Apr 17:29:53
We handle SMS, both outgoing from customers, and incoming via various carriers, and we are now linking in once again to SMS with mobile voice SIM cards. The original code for this is getting a tad worn out, so we are working on a new system. It will have ingress gateways for the various ways SMS can arrive at us, core SMS routing, and then output gateways for the ways we can send on SMS. The plan is to convert all SMS to/from standard GSM 03.40 TPDUs. This is a tad technical, I know, but it will mean that we have a common format internally. This will not be easy as there are a lot of character set conversion issues, and multiple TPDUs where concatenation of texts is used. The upshot for us is a more consistent and maintainable platform. The benefit for customers is more ways to submit and receive text messages, including using 17094009 to make an ETSI in-band modem text call from suitable equipment (we think Gigasets do this). It also means customers will be able to send/receive texts in a raw GSM 03.40 TPDU format, which will be of use to some customers. It also makes it easier for us to add other formats later. There will be some changes to the existing interfaces over time, but we want to keep these to a minimum, obviously.
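To give a flavour of the conversion work involved, here is a minimal sketch (ours, purely for illustration, not the new system's code) of unpacking the 7-bit septet-packed user data found in a GSM 03.40 TPDU:

# Unpack the GSM 7-bit default alphabet septets packed into octets in a
# GSM 03.40 TPDU user data field. Illustrative sketch only.
def unpack_septets(data: bytes, septet_count: int) -> list:
    septets = []
    carry = 0        # leftover high bits from the previous octet
    carry_bits = 0   # how many bits are waiting in 'carry'
    for octet in data:
        if len(septets) == septet_count:
            break
        septets.append(((octet << carry_bits) | carry) & 0x7F)
        carry = octet >> (7 - carry_bits)
        carry_bits += 1
        if carry_bits == 7:  # a whole septet has accumulated in the carry
            if len(septets) < septet_count:
                septets.append(carry)
            carry = 0
            carry_bits = 0
    return septets

# "hellohello" is the classic GSM 03.40 packing example
packed = bytes.fromhex("E8329BFD4697D9EC37")
print(bytes(unpack_septets(packed, 10)).decode("ascii"))  # hellohello

(The decode step there is a cheat: the GSM default alphabet only overlaps ASCII for some characters, which is exactly why the character set conversion work is fiddly.)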
Started Monday
Expected close 1 May

11 Apr 15:50:28
Details
11 Apr 15:53:42
There is a problem with the C server and it needs to be restarted again after the maintenance yesterday evening. We are going to do this at 17:00 as we need it to be done as soon as possible. Sorry for the short notice.
Started 11 Apr 15:50:28

11 Apr 10:18:57
Details
9 Apr 20:02:43

Question: Does the Heartbleed bug affect any AAISP servers?

The answer is that no servers are affected that hold customer data or our aa.net.uk SSL certificate secret key. The control and billing pages, email servers and our ticketing system are all running an unaffected version of OpenSSL.

This doesn't mean that we're running out-of-date software; we still apply backported security patches to those boxes and plan suitable upgrades in the long term.
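If you want to check a box of your own, a rough first step is to see which OpenSSL version your software is linked against - 1.0.1 through 1.0.1f are the affected releases (note that distribution-patched builds often keep the old version string, so the package changelog is the real authority). A minimal check from Python, for example:

# Rough check only: print the OpenSSL version this Python is linked against.
import ssl
print(ssl.OPENSSL_VERSION)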

Unfortunately, however, we had a single test box that was both affected by the bug and held the CAcert signed certificate that we use for our email services. We are therefore going to revoke that certificate and replace the secret key.

The chances of the key having been leaked are tiny, but we think it is worth this measure as a precaution.

Customers who do not have the CAcert root cert installed may see warnings when they connect to our email services. There is more information here: http://aa.net.uk/cacert.html

Please contact support if you have any questions.

Started 9 Apr 19:40:20 by AAISP Staff

8 Apr 16:58:41
Details
8 Apr 16:58:41
Some lines on the BT LEITH exchange have gone down. BT are aware and are investigating at the moment.
Started 8 Apr 16:30:20 by Customer report
Update was expected 8 Apr 17:40:20

7 Apr 13:45:09
Details
7 Apr 13:52:31
We will be carrying out some maintenance on our 'C' SIP server outside office hours. It will cause disruption to calls, though likely lasting only a couple of minutes, and will only affect calls on the A and C servers. It will not affect calls on our "voiceless" SIP platform or SIP2SIM. We will do this on Thursday evening at around 22:30. Please contact support if you have any questions.
Update
10 Apr 23:19:59
Completed earlier this evening.
Started 7 Apr 13:45:09
Previously expected 10 Apr 22:45:00

3 Apr 15:57:14
Details
01 Nov 2013 15:05:00
We have identified an issue that appears to be affecting some customers with FTTC modems. The issue is stupidly complex, and we are still trying to pin down the exact details. The symptoms appear to be that some packets are not passing correctly, some of the time.

Unfortunately, one type of packet that refuses to pass correctly is FireBrick FB105 tunnel packets. This means customers relying on FB105 tunnels over FTTC are seeing issues.

The workaround is to unplug the Ethernet lead to the modem and then reconnect it. This seems to fix the issue, at least until the next PPP restart. If you have remote access to a FireBrick, e.g. via a WAN IP, and need to do this, you can change the Ethernet port settings to force a renegotiation, which has the same effect. This only works if the FireBrick is directly connected to the FTTC modem, as the fix does need the modem's Ethernet to restart.

We are asking BT about this, and we are currently assuming this is a firmware issue on the BT FTTC modems.

We have confirmed that modems re-flashed with non-BT firmware do not have the same problem, though we don't usually recommend doing this as it is a BT modem and part of the service.

Update
04 Nov 2013 16:52:49
We have been working on getting more specific information regarding this, we hope to post an update tomorrow.
Update
05 Nov 2013 09:34:14
We have reproduced this problem by sending UDP packets using 'Scapy'. We are doing further testing today, and hope to write up a more detailed report about what we are seeing and what we have tested.
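For anyone wanting to try the same thing, the probe amounts to something like the following (a sketch of the approach with a hypothetical target address; our real test scripts differ):

# Send UDP packets over a range of port combinations via the FTTC modem,
# then drop and re-establish PPP and re-send the same set to see which
# combinations have been "blacklisted". Requires scapy and root.
from scapy.all import IP, UDP, Raw, send

TARGET = "192.0.2.1"  # hypothetical test sink beyond the modem

def send_probes(count=500):
    for i in range(count):
        pkt = (IP(dst=TARGET)
               / UDP(sport=10000 + i, dport=20000 + i)
               / Raw(("probe-%d" % i).encode()))
        send(pkt, verbose=False)

send_probes()  # run once before the PPP restart, once after, and compare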
Update
05 Nov 2013 14:27:26
We have some quite good demonstrations of the problem now, and it looks like it will mess up most VPNs based on UDP. We can show how a whole range of UDP ports can be blacklisted by the modem somehow on the next PPP restart. It is crazy. We hope to post a little video of our testing shortly.
Update
05 Nov 2013 15:08:16
Here is an update/overview of the situation. (from http://revk.www.me.uk/2013/11/bt-huawei-fttc-modem-bug-breaking-vpns.html )

We have confirmed that the latest code in the BT FTTC modems appears to have a serious bug that is affecting almost anyone running any sort of VPN over FTTC.

Existing modems seem to be upgrading, presumably due to a roll-out of new code by BT. An older modem that has not been on-line for a while is fine. A re-flashed modem with non-BT firmware is fine. A modem that had been working on a line for a while suddenly stopped working, presumably having been upgraded.

The bug appears to be that the modem manages to "blacklist" some UDP packets after a PPP restart.

If we send a number of UDP packets, using various UDP ports, then cause PPP to drop and reconnect, we find that around 254 combinations of UDP IPs/ports are now blacklisted, i.e. they no longer get sent on the line. Other packets are fine.

If we send 500 different packets, around 254 of them will not work again after the PPP restart. It is not simply the first or last 254 packets - some in the middle are affected - but it does seem to be 254 combinations. They work as much as you like before the PPP restart, and then never work after it.

We can send a batch of packets, wait 5 minutes, PPP restart, and still find that packets are now blacklisted. We have tried a wide range of ports, high and low, different src and dst ports, and so on - they are all affected.

The only way to "fix" it is to disconnect the Ethernet port on the modem and reconnect it. This does not even have to be long enough to drop PPP. Then it is fine until the next PPP restart. And yes, we have been running a load of scripts to systematically test this and reproduce the fault.

The problem is that a lot of VPNs use UDP and use the same set of ports for all of the packets, so if that combination is blacklisted by the modem the VPN stops after a PPP restart. The only way to fix it is manual intervention.

The modem is meant to be an Ethernet bridge. It should not know anything about PPP restarting or UDP packets and ports. It makes no sense that it would do this. We have tested swapping working and broken modems back and forth. We have tested with a variety of different equipment doing PPPoE and IP behind the modem.

BT are working on this, but it is a serious concern that this is being rolled out.
Update
12 Nov 2013 10:20:18
Work on this is still ongoing... We have tested this on a standard BT retail FTTC 'Infinity' line, and the problem cannot be reproduced there. We suspect this is because a different IP address is allocated each time the PPP re-establishes, so whatever is doing the session tracking does not match the new connection.
Update
12 Nov 2013 11:08:17

Here is an update with a more specific explanation of the problem we are seeing:

On WBC FTTC, we can send a UDP packet inside the PPP and then drop the PPP a few seconds later. After the PPP re-establishes, UDP packets with the same source and destination IP and ports won't pass; they do not reach the LNS at the ISP.

Further to that, it's not just one src+dst IP and port tuple which is affected. We can send 254 UDP packets using different src+dst ports before we drop the PPP. After it comes back up, all 254 port combinations will fail. It is worth noting that this cannot be reproduced on an FTTC service which allocates a dynamic IP that changes each time PPP re-establishes.

If we send more than 254 packets, only 254 will be broken and the others will work. It's not always the first 254 or the last 254; the broken ones move around between tests.

So it sounds like the modem (or, less likely, something in the cab or exchange) is creating state table entries for packets it is passing which tie them to a particular PPP session, and then failing to flush the table when the PPP goes down.
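To make that hypothesis concrete, here is a toy model of the suspected behaviour - entirely our speculation, not Huawei's code - with a fixed-size flow table keyed on the UDP address/port tuple that tags entries with the PPP session they were learned on, and is never flushed when PPP drops:

# Toy model of the suspected bug. Flows learned during one PPP session
# are blackholed in the next because the table is never flushed.
TABLE_SIZE = 254
flow_table = {}    # (src_ip, sport, dst_ip, dport) -> PPP session id
session = 1

def forward(flow):
    seen = flow_table.get(flow)
    if seen is not None and seen != session:
        return False                    # stale entry: packet dropped
    if seen is None and len(flow_table) < TABLE_SIZE:
        flow_table[flow] = session      # learn the flow
    return True

def ppp_restart():
    global session
    session += 1                        # bug: flow_table not cleared here

A model like this would explain the magic 254, the fix-by-Ethernet-restart (if a cable pull clears the table), and why dynamic-IP lines escape (the tuple no longer matches) - though the real selection of which 254 combinations break appears more random than this.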

This is a little crazy in the first place. It's a modem. It shouldn't even be aware that it's passing PPPoE frames, let alone looking inside them to see that they are UDP.

This only happens when using an Openreach Huawei HG612 modem that we suspect has been remotely and automatically upgraded by Openreach in the past couple of months. Further, an HG612 modem with the 'unlocked' firmware does not have this problem, and an HG612 modem that has probably not been automatically/remotely upgraded does not have this problem either.

Side note: One theory is that the brokenness is actually happening in the street cab and not the modem. And that the new firmware in the modem which is triggering it has enabled 'link-state forwarding' on the modem's Ethernet interface.

Update
27 Nov 2013 10:09:42
This post has been a little quiet, but we are still working with BT/Openreach regarding this issue. We hope to have some more information to post in the next day or two.
Update
27 Nov 2013 10:10:13
We have also had reports from someone outside of AAISP reproducing this problem.
Update
27 Nov 2013 14:19:19
We have spent the morning with some nice chaps from Openreach and Huawei. We have demonstrated the problem and they were able to do traffic captures at various points on their side. Huawei HQ can now reproduce the problem and will investigate the problem further.
Update
28 Nov 2013 10:39:36
Adrian has posted about this on his blog: http://revk.www.me.uk/2013/11/bt-huawei-working-with-us.html
Update
13 Jan 14:09:08
We are still chasing this with BT.
Update
3 Apr 15:47:59
We have seen this affect SIP registrations (which use port 5060 as both source and destination)... Customers can contact us and we'll arrange a modem swap.
Resolution BT are testing a fix in the lab and will deploy in due course, but this could take months. However, if any customers are adversely affected by this bug, please let us know and we can arrange for BT to send a replacement ECI modem instead of the Huawei modem. Thank you all for your patience.
Started 25 Oct 2013

31 Mar 17:00:00
Details
31 Mar 15:10:58
There are some router upgrades happening this week. We'll roll out changes gradually, and disruption should be minimal (a few seconds maybe, but ideally no outage at all). We'll roll out LNS upgrades overnight as well.
Started 31 Mar 17:00:00
Previously expected 7 Apr

22 Mar 07:36:41
Details
22 Mar 07:36:41
We started to see yet more congestion on BT lines last night. This looks again a bit like a link aggregation issue (where one leg of a multiple-link trunk within BT is full). The pattern is not as obvious this time. Looking at the history we can see that some of the affected lines have had slight loss in the evenings. We did not spot this with our tools because of the rather odd pattern. Obviously we are trying to get this sorted with BT, but we are pleased to say that BT are now providing more data showing which network components each circuit uses within their network. We plan to integrate this soon, so that we can correlate these newer congestion issues and point BT in the right direction more quickly.
Started 21 Mar 18:00:00

21 Mar 10:19:24
Details
11 Mar 10:11:55
We are seeing multiple exchanges with packet loss over BT Wholesale. We are chasing BT on this and will update as and when we have news. The affected exchanges/RAS are: GOODMAYES, CANONBURY, HAINAULT, SOUTHWARK, LOUGHTON, HARLOW, NINE ELMS, UPPER HOLLOWAY, ABERDEEN DENBURN, HAMPTON, INGREBOURNE, COVENTRY and 21CN-BRAS-RED6-SF.
Update
14 Mar 12:49:28
This has now been escalated to the next level for further investigation.
Update
17 Mar 15:42:38
BT are now raising faults on each individual exchange.
Update
21 Mar 10:19:24
Below are the exchanges/RAS which have been fixed by capacity upgrades. We are hoping for the remaining four exchanges to be fixed in the next few days.
HAINAULT
SOUTHWARK
LOUGHTON
HARLOW
ABERDEEN DENBURN
HAMPTON
INGREBOURNE
GOODMAYES
RAS 21CN-BRAS-RED6-SF
Update
21 Mar 15:52:45
COVENTRY should be resolved later this evening when a new link is installed between Nottingham and Derby. CANONBURY is waiting for CVLAN moves that begin 19/03/2014 and will be completed 01/04/2014.
Update
25 Mar 10:09:23
CANONBURY - Planned engineering works took place on 19.3.14, and there are three more planned for 25.3.14, 26.3.14 and 1.4.14.
COVENTRY - Now fixed.
NINE ELMS and UPPER HOLLOWAY - Still suffering from packet loss; BT are investigating further.
Update
2 Apr 15:27:11
BT are still investigating congestion on Canonbury, Nine Elms and Upper Holloway.
Broadband Users Affected 1%
Started 9 Mar 10:08:25 by AAISP Pro Active Monitoring Systems

20 Mar 11:10:57
Details
17 Feb 20:13:09
We are seeing packet loss at peak times on some lines on the Crouch End exchange. It's a small number of customers, and it looks like a congested SVLAN. This has been reported to BT.
Update
18 Feb 10:52:26
Initially BT were unable to see any problem: their monitoring was not showing any congestion, and they wanted us to report individual line faults rather than dealing with this as a specific BT network problem. However, we have spoken to another ISP who confirms the problem. BT have now opened an incident and will be investigating.
Update
18 Feb 11:12:47
We have passed all our circuit details and graphs to proactive to investigate.
Update
18 Feb 16:31:17
TSO will investigate overnight
Update
20 Feb 10:15:02
No updates from TSO, proactive are chasing.
Update
27 Feb 13:24:38
There is still congestion, we are chasing BT again.
Update
28 Feb 09:34:50
It appears the issue is on the MSE router. Lines connected to the MSE are due to be migrated on 21st March, and BT are hoping to have this completed by then.
Broadband Users Affected 0.10%
Started 17 Feb 20:10:29

14 Oct 2013 10:52:12
Details
11 Oct 2013 17:12:54
We've fixed a problem with line up/down SMS/email/tweet notifications. This may result in customers receiving old notifications, so please disregard any such historical notifications that you receive.
Update
12 Oct 2013 13:24:43
It looks like the notification system is still a bit broken, with some customers receiving many duplicate notices. We've disabled notifications entirely until we've fixed this. Sorry for any inconvenience.
Update
14 Oct 2013 10:52:12
The notification problem should be fixed now, but please do let support know if you receive any erroneous updates.
Started 11 Oct 2013 17:09:34 by AAISP Staff

25 Sep 2013
Details
18 Sep 2013 16:32:41
We have received notification that Three's network team will be carrying out maintenance on one of the nodes that routes our data SIM traffic between 00:00 and 06:00 on Weds 25th September. Some customers may notice a momentary drop in connections during this time, as any SIMs using that route will disconnect when the link is shut down. Any affected SIMs will automatically take an alternate route when they try to reconnect. Unfortunately, we have no control over the timing of this as it is dependent on the retry strategy of your devices. During the window the affected node will be offline, so SIM connectivity should be considered at risk throughout.
Started 25 Sep 2013

Today 10:44:35
Details
Today 10:44:35

This is not related to the Control Pages directly, except that various pages do link to our Wiki for further information.

The software behind wiki.aa.net.uk has been upgraded this morning. If customers notice any problems then please do let us know. Thank you!

Started Today 10:42:00

Monday 20:10:20
Details
Monday 20:10:20
Work on the billing system has resulted in interim data and call usage bills today. These will have normal payment terms, and obviously mean lower charges on your next bill. If any customers need extra time to pay because these are interim bills, please contact accounts who can extend the terms on this occasion.
Update
Monday 20:20:46
We have extended the terms on these interim invoices by two weeks for those otherwise due this month.
Update
Yesterday 07:55:13
For the technically minded - what we were attempting to do was sort an issue where a SIM gets suspended on the date of the last bill, and so the suspend date and the billed-to date are the same but some usage has not been billed for that day. We had customer complaints about this and the fact that up to a day's worth of calls and usage were not billed. The fix was a tad overzealous. We think we have this sorted now though. I do apologise for any concern this has caused.
Started Monday 20:09:03
Previously expected Yesterday 00:09:03

12 Apr 08:42:39
Details
11 Apr 08:50:44
Customers using our VoIP services will be aware that we have reserved 10 IPv4 addresses for all of our VoIP control traffic. For our current platform "Voiceless" these are also used for the media (RTP) traffic. This makes firewalling simpler, etc. Customers using asterisk will know that the config for these can be somewhat complex, listing 10 hostnames each for IPv4 and IPv6, as the way asterisk works is to look up an IP for a hostname, pick the first, and check that against the request IP address. Asterisk really needs fixing.

For Voiceless we have been using two addresses, 81.187.30.111 and 81.187.30.112, as well as IPv6 addresses 2001:8b0:0:30::5060:1 and 2001:8b0:0:30::5060:2. We recently tried a slight change on the IPv6 addresses, and this caused some issues.

What we are planning to do now, for the "voiceless" call servers, is use two addresses per server for each of IPv4 and IPv6. These shall be 81.187.30.111 and 81.187.30.113 for the A server, and 81.187.30.112 and 81.187.30.114 for the B server. You do not need to know which is A or B. The additional two addresses will be used as "source" addresses for any request from these servers to you that needs authentication. This allows asterisk to be configured separately for authenticated and unauthenticated requests. Requests may also come from other addresses within the published block when test servers are used, etc. We are also making the corresponding change to IPv6 addresses, using :1 and :3 for the A server and :2 and :4 for the B server. We may adjust which IPs are which server at a future date, as we also have to consider how we expand beyond the two servers in use currently.

These IPs will be accessible via DNS as a.voiceless.aa.net.uk, a.auth.voiceless.aa.net.uk, and so on. If your existing asterisk config is working as per the recommendations, no changes will be needed. The wiki will be updated to explain how you can use these changes.
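As an aside, if you want to enumerate every address behind these hostnames (for example, to sanity-check a hand-built permit list), something like this works - our sketch, not a recommended asterisk workaround:

# Enumerate all IPv4/IPv6 addresses for the voiceless hostnames.
import socket

for host in ("a.voiceless.aa.net.uk", "a.auth.voiceless.aa.net.uk",
             "b.voiceless.aa.net.uk", "b.auth.voiceless.aa.net.uk"):
    addrs = {info[4][0] for info in socket.getaddrinfo(host, None)}
    print(host, sorted(addrs))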
Update
12 Apr 08:44:38

We have made changes this morning - these are slightly different to planned so as to allow for future expansion.

A.voiceless 81.187.30.111 and 2001:8b0:0:30::5060:1

A.Auth.voiceless 81.187.30.112 and 2001:8b0:0:30::5060:2

B.voiceless 81.187.30.113 and 2001:8b0:0:30::5060:3

B.Auth.voiceless 81.187.30.114 and 2001:8b0:0:30::5060:4

I believe all of the necessary DNS changes have been made to match and allow existing configs to work.

Update
12 Apr 17:56:53
We have seen some issues with calls to registered phones coming from the auth'd address, and this is being looked into now.
Update
12 Apr 18:46:08
We tracked it down - it was a slight error in this morning's config. IPv4 traffic was all coming from the auth address even when no authentication was required. This will have had an impact on some calls to some devices not working properly during today.
Started 12 Apr
Closed 12 Apr 08:42:39
Previously expected Monday

11 Apr 14:43:54
Details
11 Apr 14:43:10
Just a minor update - we've added a 'Notes' field to the login details page for staff and customer use. This can be used for any additional information... To find this: log in to the control pages, and click your login.
Started 11 Apr 14:42:54

6 Apr 11:43:17
Details
6 Apr 11:43:17
We are working on changes to the source IP addresses used for SIP messages we send where we have authentication details to provide. The reason is that asterisk boxes cannot easily be configured to tell "peers" from "users" in order to decide if a challenge is needed. This has caused problems for customers using VoIP unauthenticated from our IPs and SIP2SIM authenticated as a user. The first stage of the change has been done today, and involves the source IPv6 addresses. For all authenticated IPv6 messages (i.e. those for which we expect a challenge) we are sending from 2001:8b0:0:30::5060:a000/116, whilst any unauthenticated messages are from 2001:8b0:0:30::5060:0/116. As this is within the 2001:8b0:0:30::5060:0/112 block we have advised for VoIP use, this should not need any firewall or config changes, but we are interested in any feedback on any issues encountered. The change for IPv4 will take a bit longer, as we are unsure whether to try and free up space in the 81.187.30.110-119 range previously advised, or to allocate a new range for this. This only affects calls originating from the new "voiceless" servers, which includes all SIP2SIM, but affects all authenticated messages whether SIP2SIM or otherwise. Please let us know of any issues.
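For those filtering or debugging, the split is easy to express in code. A small sketch using the ranges above:

# Classify an incoming IPv6 source address against the stated ranges.
from ipaddress import ip_address, ip_network

AUTH = ip_network("2001:8b0:0:30::5060:a000/116")   # expect a challenge
UNAUTH = ip_network("2001:8b0:0:30::5060:0/116")    # no challenge

def classify(src):
    addr = ip_address(src)
    if addr in AUTH:
        return "authenticated"
    if addr in UNAUTH:
        return "unauthenticated"
    return "not a voiceless source"

print(classify("2001:8b0:0:30::5060:a001"))  # authenticated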
Started 6 Apr 11:00:00

13 Feb 16:00:00
Details
13 Feb 15:14:54
We are investigating a potential problem with our Maidenhead routers. This should be considered an "at risk" period. Fibre links based in Maidenhead will already have blipped briefly at about 14:55, but we will try to ensure that they do not lose connectivity again.
Started 13 Feb 14:55:00
Closed 13 Feb 16:00:00
Previously expected 13 Feb 16:00:00

3 Apr 15:04:47
Details
1 Apr 08:59:54
As you may be aware the office suffered a break-in in mid March, and as a result we've installed new computers. In a change of direction for A&A we have opted for Windows 8, and are pleased to report that going forward we will only be able to support Windows based devices. Linux and Apple support will be dropped from today. We are also replacing all of our routers with Cisco IOS 14.04.01 based devices so we can deploy Carrier Grade NAT.
Update
1 Apr 13:00:00
Happy April 1st! :-)
Started 1 Apr

3 Apr 12:26:40
Details
25 Mar 09:55:20

We are seeing customer routers being attacked this morning, which is causing them to drop. This was previously reported in the status post http://status.aa.net.uk/1877 where we saw that the attacks were affecting ZyXEL routers, as well as other makes.

Since that post we have updated the configuration of customer ZyXEL routers where possible, and these are no longer being affected. However, these attacks are affecting other types of routers.

We suggest that customers with lines that are dropping check their router configuration and disable access to the router's web interface from the internet, or at least change the port used (eg to one in the range 1024-65535).

Please speak to Support for more information.

Update
28 Mar 10:13:13
This is happening again; do speak to support if you need help changing the web interface settings.
Customers with ZyXELs can change the port from the control pages.
Started 25 Mar 09:00:40
Closed 3 Apr 12:26:40

3 Apr 11:20:01
Details
3 Apr 11:20:01

Customers are now able to add Pins to their line monitoring graphs.

We add 'Pins' to the graphs automatically when various things happen - for example, when a Line Test is run. Pins are also useful for adding time-related notes - for example, adding a Pin to mark the point when a router was swapped over.

Customers are welcome to use this feature for their own purposes, as well as for pinpointing something for staff to see.

More information on the wiki: http://wiki.aa.net.uk/CQM_Graphs#Pins


3 Apr 11:06:40
Details
3 Apr 11:06:40
We've added this new category to the Status page that we'll use to notify customers of changes relating to the Control Pages. When we add new features or need to change things around we will add a post here.
Started 3 Apr 10:54:00

1 Apr 10:00:00
Details
1 Apr 12:13:31
Some TalkTalk connected lines dropped at around 09:50 and reconnected a few minutes later. It looks like a connectivity problem between us and TalkTalk on one of our connections to them. We are investigating further.
Started 1 Apr 09:50:00
Closed 1 Apr 10:00:00

31 Mar 15:03:25
Details
31 Mar 09:40:40
Some TalkTalk line diagnostics (signal graphs and line tests) available from the Control Pages are not working at the moment. This is being looked into.
Update
31 Mar 15:03:17
This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Resolution This is resolved. The TalkTalk side appears to have a bug relating to timezones.
Started 31 Mar 09:00:00
Closed 31 Mar 15:03:25

30 Mar 17:47:44
Details
30 Mar 17:47:44
We have changed the "from" address used in INVITEs to use the target hostname rather than @voiceless.aa.net.uk, so as to be consistent with the REGISTER messages and the normal behaviour of SIP handsets. This may help simplify some configurations. We have moved the REGISTER process from our test call server (z.voiceless) to our live call servers (a&b.voiceless). INVITE and REGISTER could still come from any of our call servers, so please configure to allow the full range of IP addresses. An INVITE can still come from a different server to a REGISTER.
Started 30 Mar 17:44:53

20 Mar 11:17:21
Details
20 Mar 08:38:52
Customers will be seeing what looks like 'duplicated' usage reporting on the control pages for last night and this morning. This has been caused by a database migration that is taking longer than expected. The 'duplication' comes from usage reports being missed, with the missed usage then being spread equally across those hours in subsequent reports.
This means that the overall usage reporting will be correct, but individual hours will be incorrect.
This has also affected a few other related things, such as the Line Colour states.
Update
20 Mar 11:17:55
Usage reporting is now back to normal.
Started 19 Mar 18:00:00
Closed 20 Mar 11:17:21

2 Mar 11:33:29
Details
1 Mar 04:24:02
Lines: 100% 21CN-REGION-GI-B dropped at 2014-03-01 04:22:17
We have advised BT
This is likely to have affected multiple internet providers using BT
Update
1 Mar 04:25:06
Lines: 100% 21CN-REGION-GI-B dropped again at 2014-03-01 04:23:21.
Broadband Users Affected 2%
Started 1 Mar 04:22:17 by AAISP automated checking
Closed 2 Mar 11:33:29
Cause BT

18 Mar 11:32:53
Details
18 Mar 11:32:53
We have removed the 'Services' hyperlink from our Accounts (billing) system that logs you into Clueless directly.
The alternative is for staff to set your '@a' login to be a Group login. This will then mean that your @a login will be able to see all services on your billing account.
Do email in if you'd like this set up.
Started 18 Mar 09:00:00

14 Mar 13:45:03
Details
14 Mar 08:25:52
Our offices suffered a break-in last night. Because of this, we may be slow to answer phones, etc, this morning. If it is taking a long time to get through by phone, please email instead.
Update
14 Mar 13:45:42
We're back in the office and mostly back to normal! Thank you for your patience this morning.
Closed 14 Mar 13:45:03

11 Mar 15:58:19
Details
11 Mar 15:58:19
See: http://aa.net.uk/news-2014-sip2sim.html The service consists of a normal SIM card which works in a normal mobile phone, but it makes the phone act like a SIP device on a SIP phone system or VoIP provider of your choice. This means your phone can simply be an office extension, like any other.
Started 11 Mar 15:00:00

11 Mar 15:54:57
Details
01 Mar 2013 12:26:15

As per http://status.aa.net.uk/apost.cgi?incident=1622 our Mobile Voice SIM service is being withdrawn at the end of April. Data SIMs are unaffected by this.

We do hope to offer a new service, maybe later in the year, that will work in a similar way - we have some possible candidates, but nothing we can set up yet to replace the current service.

Customers fall into different categories...

1. Some SIMs will be SIP2SIP, meaning that the SIM has no number, but is registered to some other device such as their PABX or asterisk box. These will simply stop. We cannot offer a replacement service.

2. Some SIMs will be on a normal (01/02/03) number. They will stop working as a mobile, but the number can be retained on normal £1/month terms and be directed as needed. The service allows SIP phones, and diverts (also ring) as required. You could, for example, divert to a new mobile number so incoming calls still work. There is a cost for the mobile divert at our normal rates (5p/min+VAT for mobiles now).

3. Some SIMs will be on one of our 07 numbers. These are rarer. They can keep the number. However, OFCOM rules require the call to have a wireless leg, which means we cannot simply send to SIP or a landline. What we have just changed is a system to allow these numbers to work when diverted (also ring) to another 07 number. This allows a divert to another mobile (and so stays within OFCOM rules). They will be on the same £1/month service. Again, there is a cost to divert to a mobile. Please do contact sales for further help as required.

Update
04 Mar 2013 09:37:24

The actual termination date of voice SIMs will be 30th April 2013 - NOT 31st March as previously published - sorry for the confusion.

Update
11 Mar 15:54:57
...And it's back, new and re-launched! http://aa.net.uk/news-2014-sip2sim.html
Started 01 Mar 2013 12:00:00

11 Mar 09:32:42
Details
6 Mar 13:07:51

We have had a small number of reports from customers who have had the DNS settings on their routers altered. The IPs we are seeing set are 199.223.215.157 and 199.223.212.99 (there may be others).

This type of attack is called Pharming. In short, it means that any internet traffic could be redirected to servers controlled by the attacker.

There is more information about pharming on the following pages:

At the moment we are logging when customers try to access these IP addresses, and we are then contacting those customers to make them aware.

To solve the problem we are suggesting that customers replace the router or speak to their local IT support.

Update
6 Mar 13:33:10
Changing the DNS settings back to auto, changing the administrator password and disabling WAN side access to the router may also prevent this from happening again.
Update
6 Mar 13:48:14
Also reported here: http://www.pcworld.com/article/2104380/
Resolution We have contacted the few affected customers.
Started 6 Mar 09:00:00
Closed 11 Mar 09:32:42

7 Mar 15:08:45
Details
7 Mar 15:10:59
Some broadband lines blipped at 15:05. This was a result of one of our LNSs restarting. Lines are back online and we'll investigate the cause.
Started 7 Mar 15:03:00
Closed 7 Mar 15:08:45

4 Mar 14:30:58
Details
4 Mar 14:30:58

We are pleased to confirm that we are extending the links to BT for broadband to a third gigabit hostlink. This means we will actually have six gigabit fibres to them, allowing lots of headroom and redundancy. This should be seamless to customers, but the LNSs known as "A", "B", "C" and "D" will have a new "E" and "F" added, and we will run 5 of the 6 LNSs as "live" and one as backup. We also have multiple gigabit links into TalkTalk.

This will happen over the next few months, and we will post planned work "at risk" announcements as necessary.

We are actually growing quite well now, and a lot of this has been put down to Baroness Howe mentioning us in The House of Lords recently. I'd really like to thank her for her support, even if unintentional. (see http://revk.www.me.uk/2014/01/mentioned-in-house-of-lords.html)

We have even put another person into the sales team to handle the extra load.

Started 4 Mar 14:00:00

3 Mar 13:31:25
Details
17 Jan 16:13:23
It seems that BE/Sky are informing their customers that they can no longer have their public blocks of IPs on their service. As a one-off special offer, from now until the end of March (originally the end of February), if an ex-BE customer migrates to our Home::1 tariff then we can include a /30, /29 or /28 block of IPv4, in addition to the usual IPv6 blocks, for no extra cost.
Information about Home::1 is here: http://aa.net.uk/broadband-home1.html Do mention this offer when ordering.
Do see our page about what we do when we run out of IPv4 though: http://aa.net.uk/kb-broadband-ipv6-plans.html
Update
3 Mar 13:30:56
Offer continued until the end of March.
Started 17 Jan 16:00:00

27 Feb 20:40:00
Details
27 Feb 20:29:14
We are seeing some TT lines dropping and a routing problem.
Update
27 Feb 20:39:20
Things are OK now and we're investigating. This looks to have affected some routing for broadband customers and caused some TT lines to drop.
Resolution We are not entirely sure what caused this, however we do believe it to be related to BGP flapping. This also looks to have affected other ISPs and networks too.
Started 27 Feb 20:18:00
Closed 27 Feb 20:40:00

16 Feb 17:59:00
Details
16 Feb 18:12:15
All lines reconnected right away as per normal backup systems, but graphs on the "B" LNS have lost history from before the reset. The exact cause is not obvious yet, but at the same time there is yet another of these quite regular attacks on ZyXEL routers, which adds to the confusion. As advised in another status post, there are changes to ZyXEL router config planned to address that issue.
Broadband Users Affected 33.33%
Started 16 Feb 17:58:00
Closed 16 Feb 17:59:00

24 Feb 16:00:00
Details
24 Feb 16:01:15
Some work was done today (following issues at the weekend). This work is on part of a redundant pair of routers, and so should have had no impact. The normal redundancy aspects have worked, with changes to routing and VRRP, but for some unknown reason there are issues with BGP announcements internally which are causing blips in external connectivity for some Ethernet customers. The work has been completed, but we are now investigating how it impacted services so that this can be avoided in future.
Started 24 Feb 15:45:00
Closed 24 Feb 16:00:00
Previously expected 24 Feb 16:00:00

24 Feb 17:00:00
Details
24 Feb 12:08:36
We will be doing some maintenance on some of our routers and switches in Maidenhead this afternoon. Customers should not be affected by this, but the time should be considered 'at risk'.
Resolution Works completed.
Started 24 Feb 13:00:00
Closed 24 Feb 17:00:00
Previously expected 24 Feb 16:00:00

24 Feb 12:00:00
Details
11 Jan 08:42:32
Since around 2am, as well as a short burst last night around 19:45, we have seen some issues with some lines. This appears to be specific to certain types of router being used on the lines. We are still investigating this.
Update
11 Jan 10:53:53
At the moment, we have managed to identify at least some of the traffic and the affected routers and block it temporarily. We'll be able to provide some more specific advice on the issue and contact affected customers in due course.
Update
13 Jan 14:07:56
We blocked a further IP this morning.
Update
15 Jan 08:17:47
The issue is related to specific routers, and is affecting many ISPs. In our case it is almost entirely ZyXEL routers that are affected. It appears to be some sort of widespread and ongoing SYN flood attack that is causing routers to crash, resulting in loss of sync. We are operating some source IP blocking temporarily to address these issues for the time being, and will shortly have a simple button on our control pages to reconfigure ZyXEL routers for affected customers.
Update
7 Feb 10:24:07
Last night and this morning there was another flood of traffic causing ZyXELs to restart. We suggest changing the web port to something other than 80; details can be found here: http://wiki.aa.net.uk/Router_-_ZyXEL_P660R-D1#Closing_WAN_HTTP
Update
13 Feb 10:44:41
We will be contacting ZyXEL customers by email over the next few days regarding these problems. Before that, though, to verify our records of the router type, we will be performing a 'scan' of customers' WAN IP addresses. This scan will involve downloading the index page from the WAN address.
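For the curious, such a scan amounts to no more than something like this (a sketch with a hypothetical address, not our actual tool):

# Fetch the index page from a WAN IP and look for a router banner.
import urllib.request

def probe(ip):
    try:
        with urllib.request.urlopen("http://%s/" % ip, timeout=5) as resp:
            page = resp.read(2048).decode("latin-1", "replace")
    except OSError:
        return "no answer"
    return "ZyXEL" if "ZyXEL" in page else "other/unknown"

print(probe("192.0.2.10"))  # hypothetical WAN IP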
Update
20 Feb 21:34:54
Customers with ZyXELs online have been contacted this week regarding this issue.
Update
24 Feb 11:17:13
As per email to affected customers, we are updating the http port on ZyXEL routers today - Customers will be emailed as their router is updated.
Resolution Affected customers have been notified, tools in place on the Control Pages for customers to manage the http port and where appropriate ZyXEL routers have had their http port and WAN settings changed.
Broadband Users Affected 5%
Started 11 Jan 02:00:00
Closed 24 Feb 12:00:00

22 Feb 13:00:00
Details
22 Feb 08:03:44
A disk server has failed, it is impacting all web sites we host and email. Engineers are working on this now.
Update
22 Feb 10:08:38
There is a major issue with one of the disk servers, and we are planning to switch to a backup, but that is likely to involve an engineer visit to the data centre.
Update
22 Feb 10:16:44
Engineer is on his way to the data centre now.
Update
22 Feb 11:15:40
This is looking more complex than expected - we have switched to the secondary controller, but there are issues with one of the disk arrays as well. Engineer still on site.
Update
22 Feb 11:17:48
Disk array is rebuilding now. We should have email working shortly and then web pages once the disk array rebuilds.
Update
22 Feb 11:57:48
Web space up, and mail servers being reconnected to disk array now.
Update
22 Feb 12:11:56
Issues with web pages again, investigating.
Update
22 Feb 12:14:17
The secondary disk server is now showing problems too. We are working on it.
Update
22 Feb 12:32:03
This is proving to be quite a serious issue - we appear to have issues with two separate disk controllers and with some of the RAID disks and with the file system on one of the disks. This is a very odd multiple failure, especially given that all of this is monitored constantly and was not showing any issues yesterday. We do have daily backups, so if all else fails there are ways to get service restored with backups and some loss of recent emails or changes. At this stage we are working to repair the failed file systems before considering that move.
Update
22 Feb 12:35:22
It looks like we have the mail file store repaired and mail should be back on line shortly.
Update
22 Feb 12:38:27
Web pages back.
Update
22 Feb 12:40:20
Incoming email should now be working again.
Update
22 Feb 12:44:44
We are checking all mail and web servers now to confirm all is well again.
Resolution Obviously this sort of multiple failure is somewhat unexpected. We do have plans for new disk servers anyway, and this type of failure will be considered as part of that system design.
Started 22 Feb 00:39:00
Closed 22 Feb 13:00:00
Previously expected 22 Feb 13:00:00

25 Sep 2013 16:20:00
Details
25 Sep 2013 16:09:44
We're currently investigating a problem with the disk storage server that runs our email and web space.
Update
25 Sep 2013 16:18:26
The server is being restarted at the moment.
Update
25 Sep 2013 16:20:41
The server is back on line; we are doing post-boot checks etc. to bring the services back online.
Update
25 Sep 2013 16:20:58
Web services are now back online.
Started 25 Sep 2013 16:00:00
Closed 25 Sep 2013 16:20:00

22 Feb 08:00:00
Details
22 Feb 07:56:22
There seems to have been something going on between 2am and 3am. There were even some incidents within BT, but whatever was going on managed to cause an unexpected restart of one of our LNSs ("B") at just after 3am, so graphs from before then are lost. At 07:55, lines that had ended up on the "D" LNS were moved back to the "B" LNS, causing a PPP restart.
Broadband Users Affected 33.33%
Started 22 Feb 03:00:00
Closed 22 Feb 08:00:00
Previously expected 22 Feb 08:00:00

20 Feb 18:18:00
Details
20 Feb 09:20:19
We are seeing some lines unable to log in since a blip at 02:49. We are contacting BT. These lines are in sync, but PPP is failing. It looks like a number of BT RASs are affected, including 21CN-BRAS-RED9-GI-B and 21CN-BRAS-RED1-NT-B.
Update
20 Feb 09:31:18
BT were already aware of the problem and are investigating.
Update
20 Feb 12:23:12
These lines are still down, we are chasing BT.
Update
20 Feb 13:21:20
BT believed this issue had been fixed. We have supplied them with a list of all of our circuits that are down. This is being passed to TSO and we should have an update in the next hour.
Update
20 Feb 14:26:44
A new incident has been raised as BT thought the issue was fixed.
Update
20 Feb 14:27:56
The issue is apparently still being diagnosed.
Update
20 Feb 21:17:48
BT fixed this at 18:18 this evening.
Update
20 Feb 21:34:04
BT say:
BT apologises for the problems experienced today by WMBC customers and are pleased to advise the issue has been fully resolved following the back out of a planned work completed overnight. BT is aware and understands the fault which occurred and have engaged vendor support to commence urgent investigations to identify the root cause.
The BT Technical Services teams have monitored the network since the corrective actions taken at 18:04 and have confirmed the network has remained stable.
Broadband Users Affected 0.20%
Started 20 Feb 03:49:00
Closed 20 Feb 18:18:00

20 Feb 10:00:00
Details
20 Feb 10:24:43
In addition to https://status.aa.net.uk/1891 there is a UK wide problem with lines logging in. This is affecting other ISPs, and affecting a small number of lines. BT are already aware.
Update
20 Feb 11:07:55
BT are saying this is now fixed. We saw affected lines come back online just after 10am. BT say about half of the UK 21CN WBC lines were affected, however, we only saw a few dozen lines affected.
Started 20 Feb 09:00:00
Closed 20 Feb 10:00:00

10 Feb 14:00:00
Details
10 Feb 13:36:59
Some customers are experiencing problems making and receiving calls, this is being investigated.
Update
10 Feb 14:07:47
A particular type of VoIP configuration looks to be the cause of this. We are working on a fix.
Update
10 Feb 14:41:12
The affected configuration has been changed. The VoIP service should be back to normal, but we are monitoring the situation.
Closed 10 Feb 14:00:00

1 Feb 09:00:00
Details
1 Feb 03:38:03
Lines: 100% 21CN-REGION-PR dropped at 2014-02-01 03:36:28
We have advised BT
This is likely to have affected multiple internet providers using BT
Broadband Users Affected 1%
Started 1 Feb 03:36:28 by AAISP automated checking
Closed 1 Feb 09:00:00
Cause BT

6 Feb 10:00:00
Details
6 Feb 02:07:02
Lines: 100% 21CN-REGION-DY dropped at 2014-02-06 02:05:49
We have advised BT
This is likely to have affected multiple internet providers using BT
Broadband Users Affected 1%
Started 6 Feb 02:05:49 by AAISP automated checking
Closed 6 Feb 10:00:00
Cause BT

11 Feb 22:27:34
Details
3 Feb 16:19:38

We have a fault open with BT regarding the Harvington Exchange. We are seeing packet loss, typically between 8am and 2am, and getting up to 20% at peak times in the evening.

BT have already tried resetting the line card, but this has not worked.

BT are still investigating.

Update
3 Feb 16:22:45
Example graph:
Update
3 Feb 20:34:34
This has been escalated within BT. Other ISPs are seeing a similar issue. Currently, BT's 'Technical Services' are investigating the problem.
Update
5 Feb 10:16:58

BT worked at the exchange in the early hours of this morning to try and resolve the issue. We will have to wait until around 3pm today to see if the heavy packet loss has been fixed.

The details from BT are as follows: "The technical team have worked all night on this issue. An engineer was sent to the exchange in the early hours of this morning and has reseated several IML cables in the network to see if this alleviates the issue. Ping testing has been carried out extensively since the reseats and where there was small packet loss seen prior to the reseat these are now proving to be totally clear."

Update
5 Feb 16:43:59
Looks like the amount of loss is increasing. BT are still investigating.
Update
6 Feb 11:30:34
From BT: Will get this info back over now and ensure tech services are involved to get to the bottom of this issue as agree this is really frustrating that we cannot find the route cause here
Update
7 Feb 09:06:05
Chasing BT for an update
Update
7 Feb 09:56:20
The controller card was reset rather than changed at 02:56 this morning, and TSO are now waiting for confirmation as to whether this has made a difference.
Update
10 Feb 09:22:15
BT's efforts over the weekend have not fixed the problem. We will be chasing BT again.
Update
10 Feb 10:01:42
BT are looking to see if it is possible to move our affected lines on to a different SVLAN. (in short, a different link out of the Exchange). We'll update this post when we get an update from BT.
Update
10 Feb 17:00:06
BT are planning to move the lines to a different SVLAN, we're not sure when this will be done yet though. We'll update again when we have further information.
Update
11 Feb 22:28:06
BT have moved the lines to a different SVLAN, and the packet loss problem has gone away.
Started 21 Jan 09:00:00
Closed 11 Feb 22:27:34

01 Nov 2013 11:00:00
Details
01 Nov 2013 10:44:13
We're experiencing problems with the C server. Ongoing calls should continue, but registrations will break which requires phones to log in again. This also affects the A server, as A routes calls through C. We're investigating.
Update
01 Nov 2013 11:18:03
Rebooting the C server. Ongoing calls through this server will be cut off. Sorry!
Update
01 Nov 2013 11:22:44
Now looking better. We are monitoring, but please let support know if you have any problems.
Started 01 Nov 2013 10:00:00
Closed 01 Nov 2013 11:00:00