[{"content":"A 480-person consulting firm operating almost entirely in the cloud, with teams regularly handling client data, is also preparing to expand into federal, healthcare, and payment card markets.\nAcross twelve NIST CSF 2.0 controls, nine were rated as severe.\nThat number makes more sense in context. The workforce is mostly remote and many engagements require direct access to client systems and data. Each new market also brings its own set of compliance requirements, including FISMA, HIPAA, and PCI DSS.\nThe existing security program wasn’t built with any of that in mind.\n9Severe 3Major 0Moderate 0Minor I used the NIST CSF 2.0 framework to assess twelve controls across its six functions: Govern, Identify, Protect, Detect, Respond, and Recover. For each control, I rated the current risk level, maturity tier, and priority, then defined a realistic target state.\nNo control targets Tier 4, which would represent a fully optimized and adaptive program. This is a more realistic progression from the current baseline.\nThe controls that mattered most When nine controls share the same ratings, picking the top three means looking beyond the framework. You have to think about where the organization is headed and how that changes its risk exposure.\nMy three were identity and credential management, vulnerability identification, and supplier prioritization.\nThe first two are fairly standard picks. Weak identity governance is one of the most common ways organizations get compromised. Weak vulnerability management means you can’t act on what you can’t see. Both remain high priority even after remediation because they require continuous attention, not a one-time fix.\nSupplier prioritization is where the business context shifted the decision. An AI tool I used to analyze the same data selected leadership accountability instead. That is a reasonable framework-based answer, but it didn’t reflect where the organization is headed. Federal contracts introduce FISMA, healthcare introduces HIPAA Business Associate requirements, and payment processing introduces PCI DSS, with suppliers sitting at the center of all three. That makes third-party risk a critical dependency, not a secondary concern.\nWhat the AI got right and where it missed I used an AI tool to analyze the assessment data and generate the heat map visualization. The zone placements aligned with my ratings because the tool was working directly from the input data. That part was straightforward.\nNIST CSF 2.0 Assessment Cybersecurity risk heat map · 12 controls\nCurrent state Target state PRIORITY Critical High Medium Low Click any control to inspect full current and target state details\n✕ Tier movement: → The tool also produced a written analysis covering zone placements, top three risk selections, and an overall posture summary.\nThe written analysis is where the gaps appeared. As noted in the previous section, the top three risk picks excluded supplier prioritization because the business context was not included in the prompt. It also described the three Major-zone controls as outliers “only in degree, not in kind,” while also flagging that one of them was at Tier 0 with no process in place for public recovery communications. That is not a difference in degree, but a missing capability.\nThe model applied the framework consistently. However, it could not connect those results to what the organization is actually moving toward next. 
That interpretation layer sits outside the tool and is where human context still plays a critical role.\nWhat the target state actually means\nThe target state is not a mature program, but a defined one where policies exist, processes are documented, and controls are actively being executed. For an organization starting at Tier 1 across twelve controls, that is the next realistic step.\nThe three highest-risk controls also happen to be the ones that enable progress across everything else. Identity governance has to work before access can be managed at scale. Vulnerability management has to be systematic before detection and response become meaningful. Supplier oversight has to be formalized before the firm can responsibly take on regulated clients.\n","permalink":"https://sadesing.github.io/posts/mapping-cyber-risk-with-nist-csf-2-0/","summary":"\u003cp\u003eA 480-person consulting firm operating almost entirely in the cloud, with teams regularly handling client data, is also preparing to expand into federal, healthcare, and payment card markets.\u003c/p\u003e\n\u003cp\u003eAcross twelve NIST CSF 2.0 controls, nine were rated as Severe.\u003c/p\u003e\n\u003cp\u003eThat number makes more sense in context. The workforce is mostly remote and many engagements require direct access to client systems and data. Each new market also brings its own set of compliance requirements, including FISMA, HIPAA, and PCI DSS.\u003c/p\u003e","title":"Mapping Cyber Risk with NIST CSF 2.0"},{"content":"I was given a HIPAA compliance policy manual and asked to review it, propose revisions, and explain my reasoning.\nThe manual belonged to NAIPTA, the Northern Arizona Intergovernmental Public Transportation Authority, and was adopted in April 2017. While it demonstrates a solid foundational commitment to HIPAA compliance, it is now approaching a decade old.\nIt was last updated April 19, 2017. Almost eight years have passed, during which the HHS Office for Civil Rights has issued updated guidance, the 2013 Omnibus Rule has been in full effect, and proposed modifications to the HIPAA Privacy Rule were introduced in 2021. None of that was reflected in this manual.\nMy review identified substantive gaps, outdated provisions, technical errors, and structural weaknesses, as well as areas where the plan falls short of current regulatory expectations and best practices. I proposed revisions across seven categories, but four findings stood out to me as the most significant.\nNo remote work or mobile device policy\nAn organization operating in 2026 cannot have a HIPAA compliance program that does not address remote work. HHS has published multiple cybersecurity bulletins on mobile device security, and OCR has investigated and settled numerous cases involving lost or stolen unencrypted devices.\n\u0026ldquo;Employees\u0026rdquo; vs. \u0026ldquo;individuals\u0026rdquo;\nHIPAA\u0026rsquo;s Privacy Rule was designed to protect the PHI of patients receiving health care services, not just employees of a covered entity. Throughout this manual, NAIPTA refers to the subjects of PHI as \u0026ldquo;employees,\u0026rdquo; which is confusing and potentially limiting given NAIPTA\u0026rsquo;s role as a covered entity that may handle PHI of individuals beyond its own workforce.\nA weak Business Associates policy\nThe Business Associates Policy is among the weakest sections in the manual. It is very brief and lacks substantive guidance. 
The 2013 Omnibus Rule substantially expanded Business Associate obligations, and the manual does not address required BAA elements under 45 CFR Section 164.504(e)(2).\nMissing protections for special categories of PHI\nSubstance use disorder records are governed not only by HIPAA but also by 42 CFR Part 2, which imposes significantly more restrictive confidentiality protections. Mental health records have additional protections under Arizona law (A.R.S. Section 36-509), HIV/AIDS information is subject to heightened requirements under A.R.S. Section 36-664, and genetic information is protected under GINA and 45 CFR Section 164.514(f).\nThe NAIPTA HIPAA Compliance Policy Manual demonstrates a genuine organizational commitment to HIPAA compliance and covers many required elements in reasonable detail. The primary concerns with the current version are its age, several significant compliance gaps, and a handful of factual errors. Addressing these issues would better align the manual with current regulatory requirements and OCR enforcement priorities.\n","permalink":"https://sadesing.github.io/posts/one-hipaa-manual-four-major-gaps/","summary":"\u003cp\u003eI was given a HIPAA compliance policy manual and asked to review it, propose revisions, and explain my reasoning.\u003c/p\u003e\n\u003cp\u003eThe manual belonged to NAIPTA, the Northern Arizona Intergovernmental Public Transportation Authority, and was adopted in April 2017. While it demonstrates a solid foundational commitment to HIPAA compliance, it is now approaching a decade old.\u003c/p\u003e\n\u003cp\u003eIt was last updated April 19, 2017. Almost eight years have passed, during which the HHS Office for Civil Rights has issued updated guidance, the 2013 Omnibus Rule has been in full effect, and proposed modifications to the HIPAA Privacy Rule were introduced in 2021. None of that was reflected in this manual.\u003c/p\u003e","title":"One HIPAA manual, four major gaps"},{"content":"A court case about a patient’s roommate ended up teaching me more about privacy than I expected.\nThe case was Rogers v. NYU Hospitals Center. A hospital released the name of a patient’s roommate and someone questioned whether this violated HIPAA. The name came up during a legal proceeding and the patient argued it should have been protected. The court disagreed, noting that the name alone did not reveal any medical diagnosis or treatment. A wide range of rehabilitative services were offered at Rusk Institute of Rehabilitation Medicine, so knowing someone was there didn’t tell you anything about their condition. A name by itself wasn’t protected health information in this situation.\nWhat stood out to me was how much the decision depended on context. If I were a privacy officer, the places I’d worry about most are the ones where just knowing a patient’s name would reveal sensitive information. Specialized facilities like oncology units, HIV treatment centers, dialysis clinics, psychiatric hospitals, and substance use programs. In these places, someone’s presence alone tells you their diagnosis. That’s when identity basically becomes health data.\nIt becomes even trickier in rural communities. Smaller populations make it easier to figure out who’s receiving what type of treatment, even in larger facilities. In those situations, a name has to be treated as protected information.\nLarge general hospitals, emergency departments, multidisciplinary rehab centers? 
These are usually less of a concern. They serve so many different kinds of patients that knowing someone was there doesn’t really tell you much about their diagnosis or treatment.\nContext is everything in privacy work. The same data point can create very different risks depending on where and how it is used. If you miss that, your privacy controls might end up either too loose or too restrictive.\n","permalink":"https://sadesing.github.io/posts/when-a-name-reveals-too-much/","summary":"\u003cp\u003eA court case about a patient’s roommate ended up teaching me more about privacy than I expected.\u003c/p\u003e\n\u003cp\u003eThe case was \u003cem\u003eRogers v. NYU Hospitals Center\u003c/em\u003e. A hospital released the name of a patient’s roommate and someone questioned whether this violated HIPAA. The name came up during a legal proceeding and the patient argued it should have been protected. The court disagreed, noting that the name alone did not reveal any medical diagnosis or treatment. A wide range of rehabilitative services were offered at Rusk Institute of Rehabilitation Medicine, so knowing someone was there didn’t tell you anything about their condition. A name by itself wasn’t protected health information in this situation.\u003c/p\u003e","title":"When a Name Reveals Too Much"},{"content":"Imagine you\u0026rsquo;re working with a 400-person global consulting firm that operates across healthcare, financial services, and government. AI is everywhere: custom models trained on proprietary client data, RAG systems pulling from internal documents, AI-assisted hiring, automated contract review, and predictive analytics delivered straight to the C-suite.\nNow someone asks you to figure out where the risks are. Not in theory, but in practice. Which systems could cause the most damage if something goes wrong? What framework do you apply? And how do you build a governance plan that actually fits a firm this size without over-engineering it?\nThis is the kind of challenge I worked through in a recent risk assessment.\nRanking the risks\nI started by ranking all 12 risk categories from the NIST AI Risk Management Framework in order of priority for this organization. Not every risk carries the same weight. The goal was to figure out which ones could do the most damage given how this firm actually operates.\nHere\u0026rsquo;s where the biggest risks landed and why.\n1. Information security\nThis firm runs AI models across sensitive client data, uses third-party vendors, and operates globally. A single breach could trigger regulatory penalties, contract violations, and reputational damage all at once. This risk has the potential to impact the organization at every level.\n2. Data privacy\nThe firm processes protected health information, financial records, and government data across multiple jurisdictions. HIPAA, GDPR, and sector-specific privacy laws all apply at the same time. A generative AI model can surface sensitive information on its own.\n3. Harmful bias\nThe firm plans to use AI for hiring, client analytics, and public-facing reports. If those systems reflect biased training data, the damage might not show up right away. But when it does, it shows up as legal liability, broken client trust, or credibility issues that are difficult to rebuild.\nApplying NIST AI RMF\nThe NIST AI Risk Management Framework breaks down into four core functions: govern, map, measure, and manage. Instead of treating these as abstract guidelines, I applied each one directly to the firm\u0026rsquo;s operations. 
That meant:\nDefining who owns each model and who reviews it\nMapping where sensitive data flows through AI systems\nBuilding in structured testing like red-team exercises and privacy leakage scans\nSetting up incident response procedures for when something goes wrong\nThe goal was a governance plan that connects to workflows the firm already uses, not a separate compliance layer that sits on top.\nWhere AI helped and where it missed\nI wanted to see how AI would approach the same problem. So I ran the scenario through a generative AI tool and compared its recommendations against my own. The AI produced a coherent governance plan that included:\nA board-level AI risk committee\nA dedicated chief AI risk officer\nA cross-functional review board\nA model risk management program similar to what large financial institutions use\nSome of that was genuinely useful. The ideas I kept:\nA centralized model inventory with clear lineage and a validation checklist\nStronger vendor controls, specifically restricting data reuse and requiring breach notification timelines in contracts\nA small oversight committee with assigned model owners and second-line reviewers\nBut other pieces didn\u0026rsquo;t fit. Universal provenance tracking on every output, immutable records for all advisory models, and multiple new executive roles would require resources that a 400-person firm doesn\u0026rsquo;t have. The recommendations were solid in theory, but oversized for the organization.\nThe final plan blends both. I kept my risk rankings and framework alignment, then borrowed the structural ideas from the AI that could realistically be implemented.\nPhased rollout\nThe final governance plan needed to be something this firm could actually execute. I broke the implementation into three phases:\nFirst 90 days: inventory the top ten AI use cases, publish an acceptable use policy, and run one red-team exercise to surface early issues\nMonths 3 to 6: introduce fairness testing and privacy leakage scans for the highest risk systems\nMonths 6 to 12: formalize incident response drills and expand the practices that prove effective\nEach phase also accounts for the firm\u0026rsquo;s client base. Healthcare clients require HIPAA-aligned protections. Financial clients expect clear explanations of analytic methods. Government clients expect documentation and auditability. The governance structure had to work across all three without treating each one as a separate compliance effort.\nAI governance isn\u0026rsquo;t about having the perfect framework. It\u0026rsquo;s about knowing which risks matter most for a specific organization and building something realistic enough to actually get implemented.\n","permalink":"https://sadesing.github.io/posts/ai-risk-management-in-practice/","summary":"\u003cp\u003eImagine you\u0026rsquo;re working with a 400-person global consulting firm that operates across healthcare, financial services, and government. AI is everywhere: custom models trained on proprietary client data, RAG systems pulling from internal documents, AI-assisted hiring, automated contract review, and predictive analytics delivered straight to the C-suite.\u003c/p\u003e\n\u003cp\u003eNow someone asks you to figure out where the risks are. Not in theory, but in practice. Which systems could cause the most damage if something goes wrong? What framework do you apply? 
And how do you build a governance plan that actually fits a firm this size without over-engineering it?\u003c/p\u003e","title":"What AI Risk Management Actually Looks Like"},{"content":"My interest in privacy didn\u0026rsquo;t start with some big “aha” moment. It just kind of grew over time.\nI’ve been working as a frontend engineer for the past few years, mostly focused on accessibility compliance. My background in systems engineering and human factors shapes how I think about technology. I naturally look at it through the lens of how people actually experience it — does it help them, or does it subtly get in their way? This perspective started to shape how I thought about privacy too. I began to see that principles like consent, transparency, and control were just as important as usability.\nI found myself noticing privacy issues in everyday life. Data breaches were no longer just something you read about in the news. They were happening to people around me, myself included. I started paying attention to how companies collect and use data, how little protection we actually have in the United States compared to places like Europe, and how often personal information gets mishandled or exposed. Privacy started to matter to me in a way it hadn’t before.\nAround this same time, the tech industry was shifting. AI was starting to change how development work looked. On top of that, the rollback of DEI initiatives was cutting deeply into accessibility roles, which was the area I had been working in. It became pretty clear the market was narrowing and I needed to start thinking about what came next.\nSo, I took a step back and thought about all the skills I had accumulated over the years. My human factors background helped me understand how people interact with technology. Accessibility compliance meant working with regulations, audits, and policy frameworks. Frontend development gave me a practical understanding of how systems are actually built, including the privacy considerations I also had to address. When I looked at it this way, moving toward privacy work started to make a lot of sense.\nI began looking into graduate programs and came across Albany Law School\u0026rsquo;s MS in Cybersecurity and Data Privacy. The program caught my eye because of the mix of law and technical coursework, and it is well recognized in the privacy community. Albany Law is also a participating school in the Westin Scholar program, which recognizes students for excellence in privacy. It\u0026rsquo;s fully online too, which makes it possible for me to continue working while studying.\nIt felt like the right fit, so I applied, was accepted, and am now a few weeks into my first semester.\nI already have one master’s degree, so going back to school was definitely not part of my long-term plan. But the more I learn, the more it feels like the right decision.\nWhat draws me to privacy is how it blends technology, law, and ethics. I want to understand how AI is shaping the way data is used, how different countries approach privacy regulation and what it takes to design systems that protect people’s information instead of collecting as much as possible.\nLooking back, the twists and turns in my career have brought me to a place that surprisingly feels aligned. I\u0026rsquo;m not exactly sure where this path leads yet, but I am excited to be on it.\n","permalink":"https://sadesing.github.io/posts/path-to-privacy/","summary":"\u003cp\u003eMy interest in privacy didn\u0026rsquo;t start with some big “aha” moment. 
It just kind of grew over time.\u003c/p\u003e\n\u003cp\u003eI’ve been working as a frontend engineer for the past few years, mostly focused on accessibility compliance. My background in systems engineering and human factors shapes how I think about technology. I naturally look at it through the lens of how people actually experience it — does it help them, or does it subtly get in their way? This perspective started to shape how I thought about privacy too. I began to see that principles like consent, transparency, and control were just as important as usability.\u003c/p\u003e","title":"My Path to Privacy"},{"content":"Hi, I’m Sade. I’m a graduate student in cybersecurity and data privacy at Albany Law School. My background in systems engineering and human factors shapes how I approach problems. I’m interested in how people interact with technology and how those interactions can surface privacy, security, and compliance challenges.\nI started this blog to document what I’m learning, share projects I’m working on, and contribute to conversations around ethical and responsible tech. If you\u0026rsquo;re curious about my journey into privacy, you can read more here.\nIn my free time, I enjoy traveling, photography, exploring new cuisines, and spending time with family. I’m also passionate about giving back through community work.\n😊\nWhat I\u0026rsquo;m up to Reading Data Privacy, by Nishant Bhajaria with love, from sxm SXM Recently Exploring new places $ summer 2026 EU data privacy law healthcare compliance Studying EU data privacy law, healthcare compliance one cute latte, please Sipping Discovering new coffee spots ","permalink":"https://sadesing.github.io/about/","summary":"\u003cp\u003eHi, I’m Sade. I’m a graduate student in cybersecurity and data privacy at Albany Law School. My background in systems engineering and human factors shapes how I approach problems. I’m interested in how people interact with technology and how those interactions can surface privacy, security, and compliance challenges.\u003c/p\u003e\n\u003cp\u003eI started this blog to document what I’m learning, share projects I’m working on, and contribute to conversations around ethical and responsible tech. If you\u0026rsquo;re curious about my journey into privacy, you can read more \u003ca href=\"/posts/path-to-privacy/\"\u003ehere\u003c/a\u003e.\u003c/p\u003e","title":"About"}]