OpenSSL/Heartbleed Vulnerability - Page 4 - Beyond.ca - Car Forums
Results 61 to 80 of 86

Thread: OpenSSL/Heartbleed Vulnerability

  1. #61
    Join Date
    Sep 2012
    Location
    Calgary, AB
    Posts
    1,654
    Rep Power
    87

    Default

    Originally posted by rage2

    Then that's fine. I'm just saying that there's a LOT of people out there who upgrade for the hell of it, not for functionality. You're adding new features that you would never use, and open yourself up to bugs for those new features because people make mistakes when writing new code. I'm willing to guess that most of the vulnerable servers out there didn't need anything from OpenSSL 1.x, and would've operated just as fine with OpenSSL 0.9.x.
    UndrgroundRider explained it.

    Anyhow, OpenSSL 1.0 is two years old! Jesus, at what point do you decide which version to use? How can you even know that OpenSSL 0.9 would've been any more secure?

  2. #62
    Join Date
    Jan 1970
    Location
    YYC
    My Ride
    1 x E Class Benz
    Posts
    23,609
    Rep Power
    101

    Default

    I didn't know 0.9 isn't supported. If it's not then that's a bad example.

    I assumed that there's a stable version of 0.9 and a stable version of 1.0 (both supported), in which case I would choose 0.9 since there's been much more time and scrutiny on that code base, vs the newer features in 1.0 that aren't required.

    I don't use OpenSSL, but my point still stands. I would choose an older supported release vs new releases with features I don't need. If 0.9 has vulnerabilities that can't be patched then obviously that's a bad version to be on and enough of a reason to upgrade.

    Your odds of a vulnerability are lower with fewer features and less code than with a newer version that has more of both. When would you upgrade? When there's a business/feature need to do so.
    Originally posted by SEANBANERJEE
    I have gone above and beyond what I should rightfully have to do to protect my good name

  3. #63
    Join Date
    Jun 2003
    Location
    Alaska
    My Ride
    Model S
    Posts
    2,034
    Rep Power
    26

    Default

    Originally posted by UndrgroundRider



    Ninja edit: That's really only a concern if TLS-auth wasn't being utilized. If that is the case then the operator of the server intentionally deviated from the recommended installation settings to make the security of the VPN much weaker.
    Yeah, nice fix :P Your original text was incorrect, the vuln happens prior to cert validation. tls-auth only protects you because it adds an HMAC. There is no shortage of VPNs not using that option, especially where "appliances" are involved.
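
    If you want the gist of what tls-auth buys you, here's a rough sketch of the idea in Python (simplified and hypothetical, not actual OpenVPN code; the key contents and tag handling are just for illustration). Every control-channel packet is HMAC'd with a pre-shared static key, and anything that fails the check is dropped before the TLS stack, and thus the buggy heartbeat handler, ever parses it:

    Code:
    import hmac, hashlib

    # Hypothetical illustration of the tls-auth concept: a pre-shared static
    # key HMACs every control-channel packet, and packets that fail the
    # check are dropped BEFORE any TLS code (including the vulnerable
    # heartbeat handler) runs.
    STATIC_KEY = b"pre-shared static key (a 2048-bit key file in real OpenVPN)"

    def wrap(packet):
        # Sender: prepend an HMAC-SHA1 tag (SHA1 is OpenVPN's default auth digest).
        tag = hmac.new(STATIC_KEY, packet, hashlib.sha1).digest()
        return tag + packet

    def accept(datagram):
        # Receiver: verify the tag; only verified packets reach TLS parsing.
        tag, packet = datagram[:20], datagram[20:]
        expected = hmac.new(STATIC_KEY, packet, hashlib.sha1).digest()
        if not hmac.compare_digest(tag, expected):
            return None   # dropped silently; no TLS code ever runs
        return packet     # safe to hand to the TLS layer

    Without the static key an attacker can't produce a valid tag, so their malicious heartbeat never reaches OpenSSL's vulnerable code path.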

    That was just one example. Email clients are also an easy target (if not using Exchange).

    Originally posted by rage2
    I didn't know 0.9 isn't supported. If it's not then that's a bad example.

    I assumed that there's a stable version of 0.9 and a stable version of 1.0 (both supported), in which case I would choose 0.9 since there's been much more time and scrutiny on that code base, vs the newer features in 1.0 that aren't required.

    I don't use OpenSSL, but my point still stands. I would choose an older supported release vs new releases with features I don't need. If 0.9 has vulnerabilities that can't be patched then obviously that's a bad version to be on and enough of a reason to upgrade.

    Your odds of a vulnerability are lower with fewer features and less code than with a newer version that has more of both. When would you upgrade? When there's a business/feature need to do so.
    0.9.x is still supported and you're correct. Not sure why "OpenSSL developer" is jumping down throats. The main reason everyone upgraded is because openssl provides features system-wide, and a lot of applications want to make use of those. But yeah, if you have a single purpose webserver with 0.9.x and the latest patches, you'd be fine.

  4. #64
    Join Date
    Apr 2004
    Location
    Calgary
    Posts
    2,093
    Rep Power
    44

    Default

    Hmm...interesting discussion here.

    I can understand not bothering to update existing servers unless there is a true business requirement (I work in corp IT so I've seen my share of "legacy" software), but are you guys seriously saying you would choose a 2-year-old version of software over something maybe only 6 months old on a brand new build? Not saying you need the latest developer build, but really, why would anyone go with 0.9 over 1.0 on a new server? I just can't see it.

  5. #65
    Join Date
    Jun 2005
    Location
    Calgary, AB
    My Ride
    '16 Tacoma TRD Sport
    Posts
    268
    Rep Power
    19

    Default

    Originally posted by sabad66
    Hmm...interesting discussion here.

    I can understand not bothering to update existing servers unless there is a true business requirement (I work in corp IT so I've seen my share of "legacy" software), but are you guys seriously saying you would choose a 2-year-old version of software over something maybe only 6 months old on a brand new build? Not saying you need the latest developer build, but really, why would anyone go with 0.9 over 1.0 on a new server? I just can't see it.
    I think this whole Heartbleed situation is a perfect example of why, albeit with 20/20 hindsight. Add to that the new revelation that the NSA has been using the exploit for the past 2 years to spy/snoop/etc.

  6. #66
    Join Date
    Dec 2004
    Location
    Back to YYC
    My Ride
    2008 Impreza WRX
    Posts
    139
    Rep Power
    0

    Default

    Basically, we're pretty fucked now.

    https://twitter.com/indutny/status/454773820822679552

  7. #67
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Originally posted by googe
    Yeah, nice fix :P Your original text was incorrect, the vuln happens prior to cert validation. tls-auth only protects you because it adds an HMAC. There is no shortage of VPNs not using that option, especially where "appliances" are involved.

    That was just one example. Email clients are also an easy target (if not using Exchange).
    There's no shortage of vulnerable servers in general. I don't think anyone here is debating that. Even Exchange servers are not immune in situations where they use external services for archival/anti-spam purposes. The issue is with all of the "advice" being given that is contrary to actual recommendations from the security community, such as "But now every person on the planet is saturating memory blocks with their passwords...ripe for the picking" and "I'm just saying that there's a LOT of people out there who upgrade for the hell of it, not for functionality. You're adding new features that you would never use, and open yourself up to bugs for those new features because people make mistakes when writing new code."

    Originally posted by googe
    0.9.x is still supported and you're correct.
    I was responding to the general advice rage2 gave about not upgrading unless you need new features. That's terrible advice, and I already explained why in my previous post. I'd rather be secure from known exploits than worry about 0-day exploits. This is a balance of risks: you're way more likely to get compromised if you have unpatched software vulnerable to publicly known exploits. People make tons and tons of money simply scanning for systems vulnerable to exploits that were patched years ago. It's a stupidly lucrative business for how simple it is to mitigate.

    I also said there's nothing wrong with sticking to the latest version of a stable branch. Maybe that's what rage2 meant to say, maybe it wasn't, I don't know, but what he actually said was very bad advice.

    Originally posted by googe
    Not sure why "OpenSSL developer" is jumping down throats. The main reason everyone upgraded is because openssl provides features system-wide, and a lot of applications want to make use of those. But yeah, if you have a single purpose webserver with 0.9.x and the latest patches, you'd be fine.
    I submitted a number of patches over the years. That doesn't make me an OpenSSL developer. I'm not part of the core development team, and I've never claimed otherwise. There are thousands of people who have contributed patches. Probably tens of thousands.

    Dependencies are only part of the reason OpenSSL was upgraded in stable distributions. A large part had to do with security concerns, such as the lack of TLS 1.2 support, less auditing of the code, and a general need for forward momentum. Implementing new features for web services isn't as simple as adding them to a code base. It requires general adoption to actually have impact. OpenSSL 0.9 doesn't support TLS 1.1/1.2, which is a big concern because there are established security risks with SSLv2/v3 and TLS 1.0.
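
    If you want to check where a given server actually stands, the negotiated protocol version is easy to query (standard-library Python; the hostname is just an example):

    Code:
    import socket, ssl

    # Print the TLS version a server negotiates. A server still built
    # against OpenSSL 0.9.8 tops out at TLS 1.0, since TLS 1.1/1.2
    # support only landed in the 1.0.1 branch.
    host = "example.com"  # example hostname

    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # e.g. "TLSv1.2" vs. "TLSv1"
            print(tls.cipher())   # negotiated cipher suite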
    Last edited by UndrgroundRider; 04-11-2014 at 07:31 PM.

  8. #68
    Join Date
    Apr 2009
    Location
    Nowhere
    Posts
    6,852
    Rep Power
    27

    Default

    ...
    Last edited by Sugarphreak; 07-31-2019 at 03:32 PM.

  9. #69
    Join Date
    Nov 2009
    Location
    Kelowna
    My Ride
    Volvo S60 & Jeep GC
    Posts
    701
    Rep Power
    15

    Default

    Originally posted by Sugarphreak
    CRA lost nearly 1 million Social Insurance Numbers because of this


    After "researchers" made it public, they hacked in before CRA could put a stop to it.

    http://www.cbc.ca/news/business/hear...nada-1.2609192

    You said 1 million, but the article you quote says 900?? Big difference...

  10. #70
    Join Date
    Apr 2009
    Location
    Nowhere
    Posts
    6,852
    Rep Power
    27

    Default

    ...
    Last edited by Sugarphreak; 07-31-2019 at 03:32 PM.

  11. #71
    Join Date
    Dec 2004
    Location
    Back to YYC
    My Ride
    2008 Impreza WRX
    Posts
    139
    Rep Power
    0

    Default

    Tax procrastination saves my ass.

  12. #72
    Join Date
    May 2003
    Location
    YYC
    My Ride
    WRX, Audi A4 Avant
    Posts
    826
    Rep Power
    21

    Default

    Originally posted by Sugarphreak


    I said nearly 1 million.... so 90%, that is like an A+



    EDIT: Fuck... 900, just 900... not 900 thousand? I pulled a MARth, lol
    lol, freaked me out there a second

  13. #73
    Join Date
    Jun 2003
    Location
    Alaska
    My Ride
    Model S
    Posts
    2,034
    Rep Power
    26

    Default

    What's odd is that exploiting this doesn't leave any trace in your logs, so it doesn't seem likely that they could say for certain how many records were accessed or what the extent of the damage was.
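
    For anyone wondering why nothing shows up in logs: the exploit lives entirely at the TLS layer, below anything an application would ever log. The probe is just a heartbeat record that claims a bigger payload than it carries, roughly like this (a sketch of the RFC 6520 wire format, not a working exploit; it would have to be sent after a completed handshake):

    Code:
    import struct

    # Sketch of a malicious TLS heartbeat record (RFC 6520 wire format).
    # It claims a 16 KB payload but carries none; a patched server stays
    # silent, while an unpatched one echoes back ~16 KB of whatever sat
    # in memory next to the request.
    CONTENT_TYPE_HEARTBEAT = 0x18  # TLS record type 24
    TLS_VERSION = 0x0302           # TLS 1.1
    HEARTBEAT_REQUEST = 0x01

    hb = struct.pack("!BH", HEARTBEAT_REQUEST, 0x4000)  # claim 16384 bytes, send 0
    record = struct.pack("!BHH", CONTENT_TYPE_HEARTBEAT, TLS_VERSION, len(hb)) + hb

    print(record.hex())  # 1803020003014000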

  14. #74
    Join Date
    May 2010
    Location
    Calgary
    My Ride
    8P 3.2
    Posts
    50
    Rep Power
    0

    Default

    Originally posted by googe
    What's odd is that exploiting this doesn't leave any trace in your logs, so it doesn't seem likely that they could say for certain how many records were accessed or what the extent of the damage was.
    Yeah, they can grab random memory blocks. I suppose it's possible they got 900 SINs from it, but I doubt they could put it all together to actually make use of the information. Nor could the CRA be sure of which SINs they got unless they read the memory at the moment they halted operations, which would mean it had likely been going on a lot longer than the CRA is admitting to. They just have the information that was available when they interrupted it.

    My guess is the attacker got the authentication information for some sort of admin account, used that to access the SINs the old-fashioned way, and left a trail. But I am purely guessing.

    Either way, it's likely much worse than they know and/or are admitting to.

    Root issue IMO is:

    link

    The key moment arrived at about 11 o’clock on New Year’s Eve, 2011. With 2012 just minutes away, Henson received the code from Robin Seggelmann, a respected academic who’s an expert in internet protocols. Henson reviewed the code — an update for a critical internet security protocol called OpenSSL — and by the time his fellow Britons were ringing in the New Year, he had added it to a software repository used by sites across the web.
    Don't work at 11pm on New Year's Eve. All technical reasons or details aside, no one can work at 11pm on New Year's Eve and be clear-headed and focused. This was not a root issue with OpenSSL, but a coder who simply worked too late and made a blatant, obvious oopsie. And his peer reviewer reviewed it at the same time. Even if they weren't drunk, which is unlikely, 11pm is too late to be working.

    It's not a big code mystery or a lack of funding or support. It's two guys being inattentive at 11pm on New Year's Eve.
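
    For what it's worth, the mistake itself was tiny: the heartbeat handler trusted the length field inside the message instead of checking it against the record that actually arrived. Here's the C bug simulated in Python against a fake memory buffer (illustration only, with made-up adjacent data; the real code is C and the real fix shipped in OpenSSL 1.0.1g):

    Code:
    # Simulating the missing bounds check (illustration, not OpenSSL's C code).
    # "memory" holds the 3-byte request plus whatever happens to sit next to it.
    memory  = bytearray(b"\x01\x40\x00")                      # type=1, claimed len=0x4000
    memory += b"user=alice&pass=hunter2; SIN=123-456-789; "   # made-up adjacent heap data
    memory += bytes(40)

    def heartbeat_reply(mem, record_length):
        claimed = int.from_bytes(mem[1:3], "big")  # length the *attacker* wrote
        # The fix added, in effect: if claimed + 3 > record_length: return None
        return bytes(mem[3:3 + claimed])           # buggy: copies far past the request

    leak = heartbeat_reply(memory, record_length=3)
    print(leak[:60])  # neighbouring "memory" comes back to the attacker

    (Python slicing stops at the end of the buffer; C's memcpy doesn't, which is the whole bug.)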

  15. #75
    Join Date
    Jan 2004
    Location
    Calgary, Alberta
    My Ride
    Bicycle
    Posts
    9,278
    Rep Power
    49

    Default

    Originally posted by googe
    What's odd is that exploiting this doesn't leave any trace in your logs, so it doesn't seem likely that they could say for certain how many records were accessed or what the extent of the damage was.
    Unless there is some sort of perimeter network tracking, I really doubt they know.

    Maybe that's why they qualify their finding as 6 hrs, as in that's all their perimeter logging is good for.

    If they used the exploit, and it has been around for 2 years, I'm sure people in the know have already grabbed everything they needed, like the NSA.

  16. #76
    Join Date
    May 2010
    Location
    Calgary
    My Ride
    8P 3.2
    Posts
    50
    Rep Power
    0

    Default

    Originally posted by Xtrema


    Unless there is some sort of perimeter network tracking, I really doubt they know.

    Maybe that's why they qualify their finding as 6 hrs, as in that's all their perimeter logging is good for.

    If they used the exploit, and it has been around for 2 years, I'm sure people in the know have already grabbed everything they needed, like the NSA.
    Every security appliance has the ability to collect data on the edge network. The issue is how much they can store. The answer is usually not very much: a couple of hours, maybe, to an appliance-local collector, depending on the make/model. I doubt the CRA exports the logs to an external collector for long-term retention. The volume of information would be astronomical if logged at the edge-appliance level. In our network, maintaining logs for 9 offices with a total of 200 users is around 600GB with a one-month retention. I doubt the CRA invested in enough resources to maintain external edge-appliance logs for the country. Space issues aside, the database management would be unwieldy for that amount of data. External collectors for security appliances are clunky at best; I've never seen one that can act reliably with more than 1TB of information.
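
    Just to put rough numbers on that (back-of-napkin math using my figures above; the CRA user count is a pure guess):

    Code:
    # Back-of-napkin scaling of edge log volume. The CRA figure is a guess.
    our_gb, our_users, days = 600, 200, 30
    per_user_day_mb = our_gb * 1024 / our_users / days  # ~102 MB/user/day

    cra_users = 40_000  # hypothetical
    cra_tb_month = per_user_day_mb * cra_users * days / 1024 / 1024
    print(f"{per_user_day_mb:.0f} MB/user/day -> ~{cra_tb_month:.0f} TB/month")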

    I bet they log access within the application though, which is how they hit their 6-hour number: what account accessed what, and when. But again, it's all a guess. The fact that access like that could be exposed externally is a major no-no, but you never know. Application logging is the only thing I can think of that explains how they know what accounts were accessed. Meaning the info was not a random memory grab but the use of an admin account that left traces; an admin account that could have been grabbed via Heartbleed and then used to log into the CRA site.

    But good luck getting confirmation on any of this, will always be privileged information.

  17. #77
    Join Date
    Sep 2006
    Location
    calgary / alberta
    My Ride
    VW R32 Turbo
    Posts
    785
    Rep Power
    19

    Default

    Originally posted by syscal
    Was sent a threat by our shitty data center today.



    #1 - it's not that kind of bug
    #2 - don't threaten your clients, I guarantee my lawyers are better
    #3 - if being inside the data center somehow gives me different access to your other client's racks, you're doing it wrong!

    Thank goodness we're touring Q9 Monday, no more downtime!
    Who is this DC?
    Originally posted by sputnik
    Cell providers are the next Blockbuster video stores.

  18. #78
    Join Date
    Jun 2003
    Location
    Alaska
    My Ride
    Model S
    Posts
    2,034
    Rep Power
    26

    Default

    Originally posted by Sugarphreak

    After "researchers" made it public

    Why is "researchers" in scare quotes?

  19. #79
    Join Date
    Jun 2003
    Location
    YWG
    Posts
    3,119
    Rep Power
    24

    Default

    Originally posted by Xtrema


    Unless there is some sort of perimeter network tracking, I really doubt they know.

    Maybe that's why they qualify their finding as 6 hrs, as in that's all their perimeter logging is good for.

    If they used the exploit, and it has been around for 2 years, I'm sure people in the know have already grabbed everything they needed, like the NSA.
    There are tons of different ways to collect that information. In my line of work we retain about 14 days' worth of detailed logging:

    - Enterprise packet capturing (Infinistream)
    - Netflow data
    - Reverse proxy logs
    - Load balancer logs
    - W3C logs from the server

    I suspect that the CRA used logs to come up with the 900 SIN figure.

    The six-hour timeframe comes from the time between CERT/NIST making the bug public and the time that the CRA shut down HTTPS access to Netfile.

    Given that the bug was there for a year and a half (and known by the NSA), it is really unknown how much information could have been gathered.
    Last edited by sputnik; 04-15-2014 at 06:26 AM.

  20. #80
    Join Date
    Jun 2003
    Location
    YWG
    Posts
    3,119
    Rep Power
    24

    Default

    Originally posted by frizzlefry
    Every security appliance has the ability to collect data on the edge network. The issue is how much they can store. The answer is usually not very much: a couple of hours, maybe, to an appliance-local collector, depending on the make/model. I doubt the CRA exports the logs to an external collector for long-term retention. The volume of information would be astronomical if logged at the edge-appliance level. In our network, maintaining logs for 9 offices with a total of 200 users is around 600GB with a one-month retention. I doubt the CRA invested in enough resources to maintain external edge-appliance logs for the country. Space issues aside, the database management would be unwieldy for that amount of data. External collectors for security appliances are clunky at best; I've never seen one that can act reliably with more than 1TB of information.
    I work in government network security and auditing and data collection is a pretty big deal, so I suspect it is an even bigger deal for the CRA.

    Running a SIEM like ArcSight, RSA or Trustwave is an easy way to centralize logging of all of your network appliances and enhance correlation. Throw in a pile of additional SAN space and you can easily keep live logs for weeks and archive them compressed for years. It wouldn't surprise me if the CRA holds Netfile logs forever on some form of media.

    Going one step further would be to implement a product like NetScout within the network (or at least for the externally accessed servers) to capture all packet traffic (with full data). This is done by running SPAN sessions and capturing all traffic. Again, this really comes down to adding more disk to increase retention. In the grand scheme of things, 100 TB of available disk would go a long way and be pretty cheap.
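
    To put the retention math in perspective (rough numbers; the sustained capture rate is an assumption):

    Code:
    # Rough full-packet-capture retention math with an assumed capture rate.
    link_gbps = 1.0                              # assumed sustained average
    tb_per_day = link_gbps / 8 * 86_400 / 1024   # GB/s * sec/day / 1024
    print(f"{tb_per_day:.1f} TB/day -> 100 TB lasts ~{100 / tb_per_day:.0f} days")

    At a sustained 1 Gbps that's roughly 10.5 TB a day, so 100 TB of disk buys you a little over a week of full capture; scale the disk to the retention you need.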

    For you it might seem like overkill considering you are a small enterprise with 200 users. I work in an environment with 15,000 users and we invest millions into this type of infrastructure.


