As previously discussed here, Congress enacted the Corporate Transparency Act (the “Act”) to require certain entities to file information about their beneficial ownership, with the intent to prevent and combat money laundering, corruption, tax fraud and other illicit activity. Pursuant to the Act, the U.S. Treasury Department’s Financial Crimes Enforcement Network (“FinCEN”) has adopted regulations and will establish a private national database for the information collected.

FinCEN’s final rule is effective January 1, 2024. As this deadline approaches, we thought it would be helpful to take a deeper dive into the final rule to understand its requirements and exemptions. In the coming weeks, we will discuss various related topics, including who is a beneficial owner, which entities are exempt and why an entity might apply for a FinCEN identifier.

Who must file the report with FinCEN?

Each reporting company is responsible for filing accurate and complete information with FinCEN regarding the reporting company itself and the beneficial ownership of such company. An individual acting on behalf of the reporting company must certify that the information submitted is true, correct and complete.

A “reporting company” is “any entity that is created by the filing of a document with a secretary of state or any similar office under the law of a State or tribal jurisdiction,” and includes corporations, limited liability companies, limited partnerships and, where they exist, limited liability partnerships. A reporting company can be either a domestic entity or a foreign entity formed under the law of a foreign country that registers to do business in any state or tribal jurisdiction.

A “beneficial owner” is any individual who, directly or indirectly, either exercises substantial control or owns or controls at least 25 percent of the ownership interests of a reporting company.

What information must be filed with FinCEN?

Reporting companies must provide identifying information about the company itself and its beneficial owners. If the company was formed on or after January 1, 2024, then the report must include information on the company applicant (as defined below) as well.

A reporting company must provide its:

(i) full legal name;

(ii) all trade names and “doing business as” names, regardless of whether such name is registered with any governmental authority;

(iii) current street address of the company’s principal place of business in the United States (or, if the principal place of business is outside the United States, the primary location in the United States where the company conducts business). A P.O. box or third-party address (such as the address of the company’s formation agent) may not be used;

(iv) jurisdiction of formation, whether state, tribal or foreign (a foreign reporting company must also provide the jurisdiction where it first registered to do business in the United States); and

(v) taxpayer identification number.

For every beneficial owner, a reporting company must provide the individual’s:

(i) full legal name;

(ii) date of birth;

(iii) current residential street address; and

(iv) unique identifying number and issuing jurisdiction from one of the following documents (together with an image of the document): (a) a non-expired U.S. passport; (b) a non-expired identification document issued by a state, local government or Indian tribe; (c) a non-expired driver’s license issued by a state; or (d) if the individual has been issued none of these documents, a non-expired foreign passport.

Additionally, reporting companies formed on or after January 1, 2024, must provide the above information for every individual who directly files the document that creates the reporting company (the “company applicant”). If the company applicant formed or registered the company as part of their business, then the street address of such business may be provided instead of the company applicant’s residential street address.

When must the reporting company file the report with FinCEN?

Where must the reporting company file the report with FinCEN?

FinCEN is creating an online portal through which reporting companies will submit their beneficial ownership information reports. It anticipates the portal will be available starting January 1, 2024.


In the next few months, each entity that is a reporting company should collect the necessary information for itself and its beneficial owners. To ensure compliance with these regulations, all entities should review their internal procedures and organizational documents. Ideally, an entity’s corporate governance documents (e.g. shareholders’ agreement, operating agreement, partnership agreement, etc.) will require its owners to disclose the information described above.

For further information or guidance on revising your policies, procedures, and corporate governance agreements, please contact David Paseltiner.  

For anyone who thinks they can develop, deploy, use, or market artificial intelligence with impunity, think again.

Today the Federal Trade Commission (FTC), the Department of Justice Civil Rights Division (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement (the “Joint Statement”) reminding the public that “[e]xisting legal authorities apply to the use of automated systems and innovative technologies just as they apply to other practices.” (See Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.) Specifically, the Joint Statement warned that “automated systems marketed as ‘artificial intelligence’ or ‘AI’” have the potential to magnify problems that fall under agency enforcement authority, including “the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” (See id.) In other words, automated systems/innovative technologies/artificial intelligence/AI are subject to agency authority and rules.

The Joint Statement outlined that these federal agencies are “responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections” and listed some of each agency’s prior advice concerning the potential negative impact automated systems labeled “artificial intelligence” or “AI” could have on such rights. (See id.)

Regarding unlawful discrimination in automated systems, the Joint Statement noted that (1) data and data sets, (2) model opacity and access, and (3) design and use can each contribute to potential discrimination in automated systems. (See id.) These warnings, however, are not new, and the Joint Statement reminded the public of this fact, outlining previous warnings given by each agency concerning the “potentially harmful uses of automated systems.”

If the Joint Statement was not clear enough for anyone designing, deploying, marketing, or using automated systems and artificial intelligence to understand its seriousness, the FTC’s release announcing the Joint Statement was even clearer, stating that these federal agencies “resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.” (See FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI.) The FTC release went on to remind the public: “There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.” (See id.)

What is the Takeaway?

In my prior article Beware, Quack-Quack Is In the AIr: Is Artificial Intelligence/AI Really So Smart?, I concluded: “Like the characters in 1984, today’s artificial intelligence speaks not with a brain but a larynx. It produces ‘noise uttered in unconsciousness’—at least for now.  We are still the humans in the room with voices connected to our brains—at least for now. It is time to use them to guide the conversation.”

In my article Unpacking the FTC’s Warning to “Keep Your AI Claims in Check”, I concluded that “the FTC is warning developers, sellers, and marketers of AI products to be very careful about testing, labeling, and advertising AI products and to ‘Keep your AI claims in check’ or risk experiencing the serious p(ai)n of a FTC investigation or enforcement action.”

My takeaway here is that the days of the wild west for the design, development, deployment, and marketing of artificial intelligence/AI are over; the proverbial federal agency sheriffs are coming to town. And just behind them are the litigators.

In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Shannon has presented to the New York State Bar Association (NYSBA) Commercial and Federal Litigation Section Data Security and Technology Litigation Committee concerning the role of federal agencies in regulating the use, development, deployment, and marketing of artificial intelligence and the inherent potential benefits and risks AI poses to society.

On February 27, 2023, the Federal Trade Commission (FTC) business blog released a not-so-thinly-veiled warning to AI developers, sellers, and marketers that they have a duty of care when using the term “artificial intelligence” to market a product, stating: “one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.” The article advised that the FTC is prepared to use enforcement actions to keep AI claims in check. (See Federal Trade Commission Business Blog Keep your AI claims in check.)

Specifically, the article noted that “the companies that do the developing and selling” and the “marketers” behind the products are now on notice that AI products (1) must work as advertised and (2) must pass the efficacy versus risk test. (See id.) To punctuate the seriousness of this point, the FTC business blog warned: “false or unsubstantiated claims about a product’s efficacy are our bread and butter.” In other words, the FTC intends to take enforcement action against deceptive AI business advertising. (See id.)

The article set forth the following list of mistakes to avoid when developing, selling, and marketing AI products:

  1. Avoid deceptive advertising of AI products through exaggeration of AI claims. If the developer, seller, or marketer of an AI product has exaggerated what the AI product (or any AI product for that matter) can do, or has failed to rigorously test the AI product, or has failed to account for and address potential bias in outcomes, the FTC could bring an enforcement action for deceptive advertising. Specifically, the FTC warns: “Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users under certain conditions.” (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  2. Avoid overpromising the benefits of an AI product. The FTC guidance in this area was short and sweet: if you are advertising AI product enhancement to increase the price of a product or to influence human decision-making, you must be able to provide “adequate proof” to support any claims made about the AI product. Period. (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  3. Know the risks of your AI product. AI products, often produced by third parties, may be developed through machine learning (ML) and/or deep learning (DL). The FTC warns that those who put AI products to market cannot rely on the “it wasn’t me” defense or the AI “black box”[1] defense to avoid liability for the “reasonably foreseeable risks and impact of your AI product[.]” This means that developers, sellers, and marketers of AI products alike must seriously consider and explore the risks and impact of any AI product prior to the AI product being released. (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  4. Make straightforward and accurate AI claims. If the product claims to be AI-powered, the AI claim must be accurate. The FTC warns that inaccurate AI claims can and will be sniffed out by the FTC. (See Federal Trade Commission Business Blog Keep your AI claims in check.)

What is the Takeaway?

The takeaway from this recent FTC business blog guidance: the FTC is warning developers, sellers, and marketers of AI products to be very careful about testing, labeling, and advertising AI products and to “Keep your AI claims in check” or risk experiencing the serious p(ai)n of a FTC investigation or enforcement action.

In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

[1] AI black box refers to “any artificial intelligence system whose inputs and operations aren’t visible to the user or another interested party. A black box, in a general sense, is an impenetrable system.” (See What is Black Box AI?) For a further explanation of how deep learning and neural networks operate to create the so-called AI black box, see id.; see also, my prior blog on Artificial Intelligence Beware: Quack-Quack is in the Air.

As he watched the eyeless face with the jaw moving rapidly up and down, Winston had a curious feeling that this was not a human being but some kind of dummy. It was not the man’s brain that was speaking, it was his larynx. The stuff that was coming out of him consisted of words, but it was not speech in the true sense: it was a noise uttered in unconsciousness, like the quacking of a duck.

Orwell, George. 1984. Penguin Classics (2021).

Anyone who has read George Orwell’s 1984 might remember this passage, or any of the other references to such mindless quacking. Orwell wrote about the importance of maintaining the connection of the human brain to individual (and collective) thinking, acting, and speaking, and warned of the dire consequences that would come if humans allowed their brains to disconnect. If ever there were a time to heed Orwell’s warning, it is now.

In November 2022, a new duck developed by the company OpenAI and powered by Large Language Model (LLM) machine learning (ML) descended onto the world scene. Its name—ChatGPT. (See OpenAI, “Introducing ChatGPT”.) The launch promised that the AI-powered chatbot “interacts in a conversational way” and can “answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate questions.” (See id.) The seemingly so-cool bot has since made talking about AI fashionable among non-techies (and even lawyers). With AI on (and manipulating) the brAIn, I figured this would be a good time to start writing about it in a way that non-techies (and lawyers) can appreciate and understand.

Though artificial intelligence has been lurking under the surface of much of what we do for years—think, for example, Siri, Alexa, Grammarly, chatbots, your Roomba—many, if not most, ordinary people still do not know much about it. The United States Congress, too, is struggling to understand the role of AI in society and legislation. Even the Supreme Court is confounded by these issues. It was only last month during oral arguments in the case Gonzalez v. Google LLC that several Supreme Court justices expressly stated that they were confused by the arguments before the Court concerning algorithms and the internet.[1] Thus, regardless of education level, many people find AI and algorithms befuddling. And only several months ago, before ChatGPT became all the rage, they also found the subject boring.

But AI and algorithms have very serious implications for society, business, and law. It is therefore time for average people, businesspersons, and lawyers (gasp) to begin understanding at least the basic principles of artificial intelligence. Otherwise, how can we engage in (as opposed to be confused by) the conversation, debate, and hype?

To understand and follow AI, you must have a relevant working vocabulary. Below I have provided a few key terms with understandable definitions, explanations, and references to provide a good starting point for non-tech people in business and law to learn the common vocabulary and begin engaging in discussions and decision-making about AI.[2]

Key Vocabulary and Concepts

Artificial Intelligence. In describing artificial intelligence (AI), IBM writes: “At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving.” (See, What is Artificial Intelligence? By IBM.) The definition of artificial intelligence by IBM continues: “It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.” (See, id.) Significantly, this definition references both machine problem-solving, the “simple form” of AI, and the sub-fields that have been pushing the boundaries of AI for the past several years, such as machine learning and deep learning, described below.

Algorithm. According to TechTarget, “[a]n algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that conduct specified actions step by step in either hardware- or software-based routines.” (See TechTarget’s definition of algorithm.) This definition of algorithm is easy to understand and visualize; it allows you to think of an algorithm like a recipe. When the recipe is well-written and followed exactly, the dish turns out as anticipated. When the recipe is missing ingredients, is printed out of order, or includes the wrong temperatures and measurements, the dish is prone to fail. In other words, the exactness of the recipe—and the algorithm—has a direct correlation to the outcome.
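To make the recipe analogy concrete for non-tech readers, here is a purely illustrative sketch (my own toy example, not drawn from any of the sources cited above) of an algorithm as an exact, step-by-step list of instructions:

```python
def average(numbers):
    """A simple algorithm: follow the exact steps, get the expected result."""
    total = 0.0                      # Step 1: start a running total at zero
    for n in numbers:                # Step 2: add each number to the total
        total += n
    return total / len(numbers)     # Step 3: divide the total by the count

print(average([2, 4, 6]))  # prints 4.0
```

Skip a step, or perform the steps out of order, and—just like the botched recipe—the result is wrong. The exactness of the instructions determines the outcome.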

Machine Learning. “Machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.” (See Machine Learning Explained, Sara Brown, MIT Management Sloan School, April 21, 2021.) The operative phrase in this definition is “imitate intelligent human behavior.” Though machine learning (ML) can cause the machine to appear like it is thinking, reasoning, and using human-like judgment, it is not. Rather, the machine is typically describing, predicting, or prescribing outcomes based upon its interpretation of large data sets. (See, id.) For more information about AI generally and its future implications, see, Artificial Intelligence and the Future of Work.)

Neural Networks. IBM explains: “Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.” (See, What are Neural Networks by IBM.) Diagrams of neural networks in action depict how the machine takes in particular data/information, passes it through layers of neural networks working with the inputs (sometimes referred to as a black box), and produces an outcome after the process. (See, id.) The point here is that humans do not control or teach the machine algorithms as the inputs pass through the neural networks (and often do not even understand them). Rather, the machine learns on its own during the process; hence the term “machine learning.”[3] Much of the concern about bias and explainability in AI centers around neural networks, machine learning, and deep learning. It goes without saying that machines working with garbage input will produce garbage output.

Deep Learning. “Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy.” (See, What is deep learning? By IBM; see also, What is Artificial Intelligence? By IBM.) In case you missed it, deep learning (DL) “attempts to simulate the behavior of the human brain” to “learn” from large data sets. But just as I noted in the section on ML, this subset of ML is still only a facsimile of intelligent human behavior.
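For readers who want to picture what these “layers” actually do, the following is an illustrative sketch only—the weights are arbitrary placeholders I made up, not a trained model from any cited source. Each “neuron” computes a weighted sum of its inputs and applies a simple activation function, and layers of such neurons are chained together:

```python
def relu(x):
    # A common activation function: pass positive values through, zero out negatives
    return max(0.0, x)

def layer(inputs, weights):
    # Each neuron: weighted sum of the inputs, then the activation function
    return [relu(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# A toy network: two inputs -> a hidden layer of three neurons -> one output.
# In real machine learning, these weights would be adjusted automatically
# during training on large data sets rather than chosen by a human.
hidden_weights = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
output_weights = [[0.6, 0.9, -0.1]]

hidden = layer([1.0, 2.0], hidden_weights)
output = layer(hidden, output_weights)
print(output)
```

A “deep” network simply chains three or more such layers. Because the learned weights, not a human-written rule, determine the outcome, observers often describe the middle layers as a “black box”—which is exactly where the bias and explainability concerns noted above arise.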

Natural Language Processing. “Natural language processing is a field of machine learning in which machines learn to understand natural language as spoken and written by humans, instead of the data and numbers normally used to program computers.” (See Machine Learning Explained, Sara Brown, MIT Management Sloan School, April 21, 2021; see also, What is natural language processing (NLP)? By IBM.) Natural language processing (NLP) “allows machines to recognize language, understand it, and respond to it, as well as create new text and translate between languages. Natural language processing enables familiar technology like chatbots and digital assistants like Siri or Alexa.” (See Machine Learning Explained, Sara Brown, MIT Management Sloan School, April 21, 2021.) NLP is what drives language-based AI programs. Until recently, it has been somewhat underwhelming for general application[4] to certain fields (like law).

Large Language Models. Techopedia explains: “A large language model (LLM) is a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions in a conversational manner and translating text from one language to another.” (See, Techopedia Definition of Large Language Models.) The operative word in the term large language models is “large,” which “refers to the number of values (parameters) the model can change autonomously as it learns. Some of the most successful LLMs have hundreds of billions of parameters.” (See, id.)[5]

Generative AI. “Generative AI is a type of artificial intelligence technology that can produce various types of content including text, imagery, audio and synthetic data.” (See, What is Generative AI? Everything You Need to Know, by George Lawton.) George Lawton’s article provides a more detailed explanation of how generative AI has evolved and works. The key takeaway for this article is that generative AI generates content that looks and sounds impressive to humans. It is the AI that drives ChatGPT and a slew of other new technologies coming to market at record pace. Generative AI, unlike traditional NLP, is not underwhelming but overwhelmingly catching on (even in law).

Generative AI v. AI. “Generative AI produces new content, chat responses, designs, synthetic data or deep fakes. Traditional AI has focused on detecting patterns, making decisions, honing analytics, classifying data and detecting fraud.” (See, What is Generative AI? Everything You Need to Know, by George Lawton.)

Weak/Narrow AI. The terms “weak” or “narrow” AI have been used to describe the current state of artificial intelligence. As explained by IBM, however, the terms may be misnomers in that they suggest that the machine is not strong or powerful, which is not necessarily the case:

Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles. (See, What is Artificial Intelligence? By IBM.)

The development of AI today has created a path of narrow AI bricks that, when combined and extended, are likely forming the path to general AI, described below, (a/k/a Nirvana (?), heaven (?), hell (?), human extinction (?)). Where this AI path takes us, in my view, will depend on how quickly and thoughtfully leaders, lawyers, and general citizens get involved in directing the future of AI.

Strong/General AI. I turn again to IBM to describe strong/general AI:

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn’t mean AI researchers aren’t also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey. (See, What is Artificial Intelligence? By IBM.)

The key to general AI is that a machine with general AI would not be mimicking the intelligence, consciousness, and judgment of humans; it would actually possess those traits and surpass humans.

Why Talk About Orwell and AI Now?

AI is rapidly becoming ubiquitous in business and society, and generative AI technology will only hasten its prevalence. Now is the critical time for leaders and citizens alike to understand the machines that are guiding us–literally. Very recently, MIT professor Aleksander Mądry spoke before Congress, urging lawmakers to get involved in shaping the future of AI rather than leaving it up to Big Tech, testifying:

We are at an inflection point in terms of what future AI will bring. Seizing this opportunity means discussing the role of AI, what exactly we want it to do for us, and how to ensure it benefits us all. This will be a difficult conversation but we do need to have it, and have it now[.]

(See, MIT professor to Congress: “We are at an inflection point” with AI, MIT Washington Office, March 10, 2023 (urging lawmakers to ask rigorous questions about how AI tools are being used by corporations).) I could not agree more.[6]

I love reading George Orwell and learning about (and using) AI, but both require effort, thinking, and conscious intent. Like the characters in 1984, today’s artificial intelligence speaks not with a brain but a larynx. It produces “noise uttered in unconsciousness”—at least for now.  We are still the humans in the room with voices connected to our brains—at least for now. It is time to use them to guide the conversation.

In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Shannon is passionate about fostering partnering with machines in business and law and empowering leaders and general citizens to engage in shaping the future of artificial intelligence in business, law, and society.

[1] (See also Transcripts of the oral arguments.)

[2] The purpose of this article is not to provide a comprehensive, technical explanation of artificial intelligence. Rather, it is meant to be a guide for business and legal professionals to become familiar with AI.

[3] This overly simplified description is intended to highlight the general process of machine learning. For a more detailed and technical explanation, see the referenced articles.

[4] Though I would argue that AI has been exciting and overwhelmingly underused by the vast majority of attorneys (particularly in eDiscovery), in part, because law schools and law firms alike have not yet made technological competence/literacy a core part of legal training for every law student and practicing attorney.

[5] For a more robust explanation of Large Language Models and their impact on AI, see The Next Generation of Large Language Models by Rob Toews.

[6] While this article was being reviewed for print, Elon Musk and others signed an open letter seeking a six-month pause on giant AI experiments. (See Pause Giant AI Experiments: An Open Letter.) Responding in large part to GPT, the Open Letter asserts: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” (See id.) Like Professor Mądry, the signatories to the Open Letter opine that Congress and policy makers must be involved in the process, stating: “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.” (See id.)


The Adult Survivors Act (“ASA”) took effect in New York State on November 24, 2022.  The ASA amends the statute of limitations in New York for civil actions arising from sexual offenses committed against persons over the age of 18.  Survivors of sexual assault have a one-year window – until November 24, 2023 – to file civil lawsuits for their claims regardless of how long ago the abuse occurred and irrespective of whether the original statute of limitations has already expired.  In addition to the alleged abusers, potential defendants in ASA civil lawsuits can include employers and institutions based on negligent or intentional acts in allowing and/or facilitating the alleged abuse to occur.  The ASA covers a broad array of sexual offenses, ranging from groping and forcible touching to rape and sodomy.

The ASA is modeled on the Child Victims Act (“CVA”), which became effective in August 2019, and which amended the statute of limitations in New York for civil actions arising from sexual offenses committed against persons under the age of 18.  The one-year window for CVA civil lawsuits (extended due to the Covid pandemic) expired in August 2021.  Aside from alleged abusers, frequently sued employers and institutions in CVA lawsuits included schools, religious institutions, and scouting organizations.  Those same employers and institutions will also likely face additional civil lawsuits under the ASA.  However, the pool of potential defendants likely to face civil lawsuits under the ASA has broadly expanded to include virtually all business entities, municipalities and government agencies, not-for-profit organizations, hospitals and health care providers, and professional services firms, to name a few.  Survivors of sexual abuse will be able to sue employers and institutions in cases where the alleged sexual abuse occurred in or was associated with the employment relationship.

So what can employers and institutions do in anticipation of possible ASA civil lawsuits and/or upon being named as defendants in ASA civil lawsuits?  This article will present strategies for employers and institutions to consider when faced with ASA civil lawsuits, especially since some of the claims could involve alleged sexual abuse that occurred decades ago.

If you are aware of claims and/or allegations of sexual abuse in the workplace which may not have been pursued at the time that the sexual abuse occurred, you should search, gather, review, and preserve available paper and electronic files, correspondence, and documents related to any such historical claims.

To the extent any such material is still available, you should search, gather, review, and preserve paper and electronic files, correspondence, and documents relating to any historical office policies and procedures for the reporting of any claim of sexual abuse occurring in the workplace or otherwise associated with the employment relationship.

If you are sued in an ASA civil lawsuit:

  • determine the existence of any liability insurance policy (including excess policies) which could have been in effect at the time of the alleged abuse which affords coverage for your business or organization;
  • if any such liability insurance policies are located, provide timely written notice of claim (including a copy of the lawsuit) to the liability insurance carrier with a request for a defense and indemnity;
  • conduct a diligent search for all paper and/or electronic files, correspondence and documents which are available concerning the plaintiff victim and the alleged abuser(s); and
  • if a liability insurance policy cannot be located, determine the existence of alternative proof of liability insurance coverage – that can include a variety of options ranging from written correspondence with an insurance broker or agent to reviewing prior litigation and historical records.

Unless the plaintiff/victim commenced the lawsuit anonymously or by pseudonym, you should be able to identify potential or actual witnesses who may have information concerning the alleged abuse, the plaintiff/victim, the alleged abuser, or the policies/procedures in effect at the time of the alleged abuse.  If any potential or actual witnesses are still employed, you should interview them for relevant information about the allegations of abuse.  If any potential or actual witnesses are no longer employed, determine their last known addresses to provide to counsel so that they can be interviewed during the investigatory stage of the lawsuit.

If you are unable to locate information establishing liability insurance coverage for the time of the alleged abuse, you may have to consider establishing financial reserves to protect against any possible settlement or judgment which may be obtained against your business or organization by the survivor plaintiff.

Even though the survivor plaintiff bears the burden of proving his or her claims in any ASA civil lawsuit, there are actions an employer can take to maximize the effectiveness of its defense while minimizing its financial exposure from an influx of ASA civil lawsuits.

If you have any questions or need any assistance in the defense of an ASA civil lawsuit and/or in determining the existence of liability insurance coverage to defend and indemnify you against an ASA civil lawsuit, please contact Scott Fisher in our Litigation Practice Group at (516) 393-8248 or at

On August 8, 2022, the Governor signed a bill amending Real Property Tax Law § 467(3)(a), “increasing the amount of income property owners may earn for the purpose of eligibility for the property tax exemption for persons sixty-five years of age or over and for persons with disabilities and limited income.”[i] This bill “allows municipalities to increase the maximum income eligible for New York’s real property tax exemption to $50,000 for people age 65 and over and people with disabilities. Before today, the maximum income eligible was $29,000 per year outside of New York City for seniors and people with disabilities.”[ii]

According to the bill, “New York State has a growing number of low-income seniors on fixed incomes and persons with disabilities who have limited income who are faced with ever increasing property taxes making it difficult for them to continue to live in and maintain their own homes.”[iii]  The bill was designed to lessen the burden of increasing property taxes and “allowing local governments the option to raise the maximum income eligibility limit for the Senior Citizen Real Property Tax Exemption program and the Persons with Disabilities Real Property Tax Exemption.”[iv]

“Under the new legislation, qualifying senior citizens and persons living with disabilities will be eligible to receive up to a 50% reduction on their assessment.…”[v] “This reduction is reflected in county, town, and school taxes. A reduction of 50% is the maximum allowed exemption and requires an annual household income of $50,000 or less to qualify.”[vi]  There is a sliding scale based upon the annual household income.[vii]

To date, at least one local municipality, the Town of Hempstead, has indicated that it intends to opt-in to the legislation as soon as possible.[viii] There are certain requirements to obtain this tax exemption, including that “all owners of the property must be 65 years of age or older, or if owned by husband and wife, one must be 65 years of age or older. The applicant must own the property and have owned the property for at least 12 consecutive months, or have owned a previous residence in New York State for one year prior to filing for this exemption.”[ix] Generally, in order to obtain such an exemption, a senior citizen must file an application each year.[x]

If you have any question concerning your property taxes, please feel free to contact Christopher E. Vatter, Esq. or Andrew M. Mahony, Esq. at (516) 746-8000.




[iv] Id.


[vi] Id.

[vii] Id.

[viii] Id.


[x] Id.

“This bill seeks to protect workers from corporations and their agents that fail to comply with safety protocols by amending the penal code to create new offenses and substantially increasing the fines that can be imposed upon a corporate defendant convicted of certain crimes.”[1] The bill has not yet been signed by Governor Hochul.

“Carlos’ Law is named for 22-year-old Carlos Moncayo, an Ecuadorean immigrant . . .  who was buried alive at a construction site in New York City’s meatpacking district in April 2015 while working in an unreinforced 13-feet-deep trench that had been cited by safety inspectors.”[2] “On the morning of April 6, 2015, according to the New York Times, an inspector visited the site, noticed a trench without proper earth-retaining equipment, and issued a warning. Mere hours later, the walls of the trench collapsed on Moncayo, who was pronounced dead on the scene.”[3]

The contractor “was convicted of manslaughter in the second degree, criminally negligent homicide, and reckless endangerment. . . . But despite these criminal convictions, the Moncayo family, who faced the tragic loss of a son and brother, reportedly did not receive any compensation.”[4] “The Occupational Safety and Health Administration (OSHA) fined the contractor approximately $10,000 (the maximum fine possible) for this negligence.”[5]

The proposed bill would increase the fine for criminal liability from a maximum of $10,000 to no less than $500,000, or, in the case of a misdemeanor, no less than $300,000.[6]  The proposed bill explains that:

Workplace deaths and serious injuries continue to be commonplace in the construction industry. Of the more than 400,000 workplace fatalities since Congress enacted the Occupational Safety and Health Act (OSH Act), fewer than 80 have been prosecuted, and only about a dozen employers have been convicted. That is roughly 1 conviction for every 33,000 fatalities. In the few cases that have resulted in conviction, the penalty was only $1,000 on average. Under the OSH Act, the criminal penalty is considered a Class B misdemeanor, and carries, at most, up to 6 months imprisonment. The weakness of the OSH Act’s punitive measures has therefore failed to encourage safer work environments.[7]

“This bill increases punitive measures so that corporations and their agents who ignore or fail to follow safety protocols and procedures and put workers at risk are less likely to write off serious workplace injuries as a minimal cost of doing business, and more likely to give workplace safety the serious attention it requires.”[8]

It is hoped that this proposed bill will encourage contractors to maintain safe construction sites. The information in this article is subject to change depending on whether the proposed bill is signed by the Governor.  We will keep our readers informed with respect to any new developments.

The material in this article is meant only to provide general information and is not a substitute for, nor does it constitute, legal advice. In the event you need legal assistance, contact Christopher E. Vatter at



[3] Id.

[4] Id.

[5] Id.


[7] Id.

[8] Id.

Effective as of June 9, 2022, the Administrative Code of the City of New York[1] was amended to require that “certain businesses that supply their employees to clients for the performance of construction work or manual labor on the client’s construction site, in exchange for compensation, be licensed.”[2]

These businesses are defined as “construction labor providers.”[3]  “Construction Labor Providers, also known as body shops or temp agencies, are businesses that supply temporary workers to third-party clients for non-union construction work or manual labor.”[4]  However, “[t]he term ‘construction’ in this bill explicitly excludes handyman work.”[5] A license is also not required for employment agencies, professional employer organizations, general contractors and subcontractors (as defined in § 20-564 of the NYC Administrative Code[6]).

As explained by Commissioner Vilda Vera Mayuga of the Department of Consumer and Worker Protection, “[t]emporary construction workers are often immigrants or individuals reentering the workforce and vulnerable to mistreatment and fear retaliation for reporting abuse.”[7] This law is designed to ensure that “businesses employing these workers are licensed, inform [Department of Consumer and Worker Protection] of their business operations, maintain records, and provide their workers with information about their rights and responsibilities, which will increase transparency and safety in the industry.”[8]

“Applying for a license would require certain signed statements and select information on business operations, and each covered business would have to supply their workers with a series of notices: on their rights as workers covered by this bill; training and certifications the employees would need to perform their work duties; and information on the employees’ work assignments.”[9]

“Businesses that violate the bill’s subchapter would also be subject to penalties. Employees of the businesses aggrieved by a violation of the bill’s subchapter would be able to initiate a private right of action against their employers for violations of the bill, including for retaliation against employees for availing themselves of rights provided by this bill.”[10]

This is another issue to be considered when performing construction work in New York City. Jaspan Schlesinger LLP can help you navigate these issues and other construction law related matters. If you need assistance, please contact Christopher E. Vatter at


[1] 2022 N.Y.C. Local Law No. 150, N.Y.C. Admin. Code § 20-564.


[3] A construction labor provider “means a person who employs and supplies a covered construction worker to a third party client for the performance of construction work or manual labor for a construction project of such client on a site in the city, in exchange for compensation from such third party client, provided that the completion of such project is directed by such client or such client’s contractor and not such person.” (2022 N.Y.C. Local Law No. 150, N.Y.C. Admin. Code § 20-564.1).


[5]; see also NYC Administrative Code §20-564 and 28-



[8] Id.

[9] See also Construction Labor Provider License Application Checklist.


On June 30, 2022, Governor Hochul signed legislation[1] that: “expands which documents can be used to show identity theft in certain circumstances relating to debt collection.”[2] “Under current law, a principal creditor shall cease collection activities until completion of the review of certain information submitted by a debtor who claims they were the victim of identity theft. The victim must have filed a police report alleging the identity theft; there is no alternative reporting permitted under the law.”[3]

However, not all identity theft occurs between parties that do not know each other.  Often identity theft “occurs as a result of a domestic violence or an elder abuse situation, where the perpetrator is known to the victim. Under circumstances where the victim is familiar with the perpetrator, the victim may not be able to or may not wish to pursue criminal charges.”[4] “The current law compels a victim of identity theft to report such crime to law enforcement, whether they wish to or not or whether it is safe for them to do so or not, in order for collection activities against them to be suspended as further investigation is made into the legitimacy of the debt.”[5]

Recognizing the difficulties presented where the perpetrator is known to the victim, the law now expands the types of documents which can be used to show identity theft relating to debt collection in lieu of a victim reporting the identity theft to law enforcement. Under the new law, these new documents include Federal Trade Commission and law enforcement reports, as well as criminal and family court documents which support the statement of identity theft.[6]

If you are the victim of identity theft and need assistance with the types of documents needed to report the theft of your identity, please contact Christopher E. Vatter at




[4] Id.

[5] Id.

[6] Id.

In an effort to move to fully electronic processing of trademark applications and registrations and to positively impact the environment by reducing the use of paper, effective June 7, 2022, the United States Patent and Trademark Office (USPTO) will begin issuing electronic trademark registration certificates.  The change to electronic trademark registration certificates is intended to give trademark owners easier and quicker access to their trademark certificates upon registration.

In making the transition to electronic registration certificates, the USPTO is acknowledging the strong consumer preference for the issuance of trademark registration certificates in a digital format rather than as a paper certificate.  The change will also decrease the time it takes for trademark owners to receive registration certificates.

As of June 7, 2022, trademark registration certificates will no longer be issued by the USPTO by printing them on paper and mailing them to the correspondence address of record.  Instead, the registrations will issue electronically under the electronic signature of the Director of the USPTO with a digital seal.  The digital seal will authenticate the trademark registration.  The electronic registration certificate will be uploaded to the USPTO database, with notice emailed to the trademark owner with a link to access the certificate upon issuance.  Trademark owners will be able to use the link to view, download, and print a complete copy of the registration certificate at no charge at any time.

However, trademark owners will still be able to order a “presentation” copy of the registration certificate.  The presentation copy is a one-page, condensed, printed copy of the issued registration that is suitable for framing.  The presentation copy will be printed on heavy paper; feature a gold foil seal; identify the owners; and display bibliographic data, the trademark, and the classes of goods and/or services of the trademark.  There is a $25 fee for each presentation copy, which can be ordered through the USPTO’s Trademark Electronic Application System database.  For a $15 fee, trademark owners will still be able to order certified copies of their trademark registration certificates from the USPTO.  The certified copy can be used in connection with legal proceedings and certifies the trademark’s status and title and includes the signature of the authorized certifying officer.

If you have any questions or need assistance with the filing of any trademark applications or obtaining trademark registration certificates, contact the Chair of our Trademark Practice Group, Scott Fisher, at (516) 393-8248 or