What to expect:
AI, data science, machine learning, deep learning and cybersecurity talents from FrankfurtRhineMain take note!
The AI Talent Night, a networking event with a difference, offers you the opportunity to meet other talents as well as potential employers and AI experts. But that’s not all, because a great ambience, delicious food, live music, AI visualization and 3 VIP talks by leading AI experts from the business world await you.
And the best part: Are you a talent? Then you can join us for free.
Together with our partners AI FrankfurtRheinMain e.V. and STATION HQ, we are organizing the AI Talent Night as part of the UAI conference series. Our goal is to promote AI in our region and establish an AI hub in Frankfurt.
Become a part of our AI community!
Tickets for the event can be purchased at https://pretix.eu/STATION/UAI-Talent/ or on our website.
What to expect:
Have you always wanted to know how AI can be used to make our world more sustainable? Then you shouldn’t miss the “Green AIdeas” after-work event.
The event offers AI companies a stage for their ideas and best practices as well as an exchange platform for inspiration, use cases and challenges around the topic of AI and sustainability.
In addition to discussion panels and the guided tour through the sustainable data center, you will have the opportunity to participate in one of the exciting AI & Sustainability talks.
One of these talks on “Sustainable Machine Learning – What impact do Machine Learning models have on the environment and what countermeasures can be taken?” will be held by our statworx AI experts Isabel Hermes and Dominique Lade.
The talk will cover topics such as:
- What machine learning has to do with sustainability
- How to identify and reduce emissions from AI applications
- How machine learning can be made more sustainable
The best part: everything is free of charge.
Click here to register: https://mautic.cloudandheat.com/green-aideas-registration
We look forward to seeing you on September 8!
Why we need AI Principles
Artificial intelligence has already begun to fundamentally transform our world and will continue to do so. Algorithms increasingly influence how we behave, think, and feel. Companies around the globe will continue to adopt AI technology and rethink their current processes and business models. Our social structures, how we work, and how we interact with each other will change with the advancements of digitalization, especially in AI.
Beyond its social and economic influence, AI also plays a significant role in one of the biggest challenges of our time: climate change. On the one hand, AI can provide instruments to tackle parts of this urgent challenge. On the other hand, the development and the implementation of AI applications will consume a lot of energy and emit massive amounts of greenhouse gases.
Risks of AI
With the advancement of a technology that has such a high impact on all areas of our lives come huge opportunities but also big risks. To give you an impression of the risks, we just picked a few examples:
- AI can be used to monitor people, for example, through facial recognition systems. Some countries have already been using this technology extensively for several years.
- AI is used in very sensitive areas where minor malfunctions could have dramatic implications. Examples are autonomous driving, robot-assisted surgery, credit scoring, recruiting candidate selection, or law enforcement.
- The Facebook and Cambridge Analytica scandal showed that data and AI technologies can be used to build psychographic profiles. These profiles allow microtargeting of individuals with customized content to influence elections. This example shows the massive power of AI technologies and their potential for abuse and manipulation.
- With recent advancements in computer vision technology, deep learning algorithms can now be used to create deepfakes. Deepfakes are realistic videos or images of people doing or saying something they never did or said. Obviously, this technology comes with enormous risks.
- Artificial intelligence solutions are often developed to improve or optimize manual processes. In some use cases, this will lead to the replacement of human work, a challenge that cannot be ignored and needs to be addressed early.
- In the past, AI models have reproduced discriminatory patterns in the data they were trained on. For example, Amazon used an AI system in its recruiting process that clearly disadvantaged women.
These examples make clear that every company and every person developing AI systems should reflect very carefully on the impact the system will or might have on society, specific groups, or even individuals.
Therefore, the big challenge for us is to ensure that the AI technologies we develop help and enable people while minimizing any forms of associated risks.
Why are there no official regulations in place in 2022?
You might be asking yourself why there is no regulation in place to address this issue. The problem with new technology, especially artificial intelligence, is that it advances fast, sometimes even too fast.
Recent releases of new language models such as GPT-3 and computer vision models such as DALL-E 2 exceeded the expectations of many AI experts. The abilities and applications of AI technologies will continue to advance faster than regulation can. And we are not talking about months, but years.
It is fair to say that the EU made a first attempt in this direction by proposing a regulatory framework for artificial intelligence. However, the Commission indicates that the regulation could apply to operators in the second half of 2024 at the earliest. That is years after the above-described examples became a reality.
Our approach: statworx AI Principles
The logical consequence of this issue is that we, as a company, must address this challenge ourselves. And therefore, we are currently working on the statworx AI Principles, a set of principles that guide us when developing AI solutions.
What we have done so far and how we got here
In our task force “AI & Society”, we started to tackle this topic. First, we scanned the market and found many interesting papers but concluded that none of them could be transferred 1:1 to our business model. Often these principles or guidelines were very fuzzy or too detailed and unsuitable for a consulting company that operates in a B2B setting as a service provider. So, we decided we needed to devise a solution ourselves.
The first discussions showed four big challenges:
- On the one hand, the AI Principles must be formulated clearly and for a high-level audience so that non-experts also understand their meaning. On the other hand, they must be specific enough to be integrated into our delivery processes.
- As a service provider, we may have limited control and decision-making power over some aspects of an AI solution. Therefore, we must understand what we can decide and what is beyond our control.
- Our AI Principles will only add sustainable value if we can act according to them. Therefore, we need to promote them to our customers in our projects. We recognize that budget constraints, financial targets, and other factors might work against the proper application of these principles, since applying them will require additional time and money.
- Furthermore, what is wrong and right is not always obvious. Our discussions showed that there are many different perceptions of the right and necessary things to do. This means we will have to find common ground on which we can all agree.
Our two key takeaways
A key insight from these thoughts was that we would need two things.
As a first step, we need high-level principles that are understandable, clear, and where everyone is on board. These principles act as a guiding idea and give orientation when decisions are made. In a second step, we will use them to derive best practices or a framework that translates these principles into concrete actions during all phases of our project delivery.
The second major thing we learned is that going through this process and asking these questions is tough, but also inevitable for every company that develops or uses AI technology.
What comes next
So far, we are nearly at the end of the first step. We will soon communicate the statworx AI Principles through our channels. If you are currently in this process, too, we would be happy to get in touch to understand what you did and learned.
References
https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
https://www.bundesregierung.de/breg-de/themen/umgang-mit-desinformation/deep-fakes-1876736
What to expect:
Do you want to learn more about AI & Sustainability? Then you shouldn’t miss out on the “Reaching the Sustainable Development Goals with AI” event, taking place in Stuttgart on July 13.
Among other things, you will have the opportunity to participate in a workshop on “Alignment of SDGs and AI” led by our statworx AI experts Marcel Isbert and Jonas Braun.
In this 90-minute workshop, you will:
- learn about the relevance of SDGs within the AI world
- see examples of current AI products that are contributing to the 17 goals
- participate in a practical part, where you develop your own use case for your top 3 SDGs to incorporate AI into a sustainable business or sustainability into an AI business
Seats are limited, so don’t forget to register!
We cannot wait to see you on July 13!
What you can expect:
On May 16, the first and largest AI event in Frankfurt am Main will take place at the Museum für Kommunikation. The initiators of the UAI event are AI Frankfurt Rhein Main e.V. and STATION. FrankfurtRheinMain.
The AI Startup & Talent Event aims to connect international and regional AI experts to expand the AI ecosystem in Germany. It will connect founders, investors, corporates, and talents in the AI field. You can look forward to our AI masterclasses and top speakers from the AI world.
Agenda Highlights:
- AI Use Case
- AI Talent Matching
- Panels & Roundtables
- Speeddating & Networking
- AI Masterclasses
- AI Speakerbox
Our AI network is looking forward to meeting you.
Get your ticket now!
The event will be held in German and English. All tickets include full access to all formats and catering at the venue.
What you can expect:
The aim of the fair is to connect students of business informatics, business mathematics and data science with companies.
We will be there with our own booth and will additionally introduce statworx in a short presentation.
Weather permitting, the fair will take place outdoors this year. Participation is free of charge, and pre-registration is not necessary. Just come by and talk to us.
What you can expect:
The konaktiva fair at Darmstadt University of Technology is one of the oldest and largest student-organized company career fairs in Germany. In line with its motto “Students meet companies”, it brings together prospective graduates and companies every year.
This year, we are again taking part with our own booth and several colleagues, and we are looking forward to exchanging ideas with interested students. We will be happy to present the various entry-level opportunities at statworx – from internships to permanent positions – and share insights into our day-to-day work.
In addition, there will be the opportunity to get to know us better and to discuss individual questions and cases during pre-scheduled one-on-one meetings away from the hustle and bustle of the trade fair.
Participation in the fair is free of charge for visitors.
What you can expect:
Big Data World is back: 250+ exhibitors, 250+ speakers, and with them plenty of insightful talks and information around AI & analytics. We are also part of this exciting event and look forward to your visit at our booth, M18. Here, you can experience artificial intelligence first-hand…
No matter where you are on your journey to becoming a data-driven company: there is always room for improvement. Tools and technology in BI & analytics as well as in the field of artificial intelligence are developing rapidly, and so are the opportunities to create value through innovative use cases.
Whether you are a chief data officer or data scientist, IT expert or a business user, we will be happy to answer all your questions – from strategy to implementation.
Do you still need a ticket? No problem, you can register for free via this ticket link.
We are looking forward to seeing you!
What to expect:
START Summit is Europe’s largest student-organized conference for entrepreneurship and technology. It aims to actively promote innovation by bringing together more than 5000 startups, investors, companies and young talent.
This year, we are also participating in the START Summit, which is taking place as a hybrid event for the first time in its 25-year history. As a sponsor of the START Hack, we are awarding AI coaching sessions to five hackathon teams that have worked on an idea in the field of AI and want to develop it further.
This event brings together founders, investors, students and companies, all in the spirit of innovation. Therefore, we are looking forward to two or three very interesting, informative and innovative days.
Here you can buy tickets for the event.
“Building trust through human-centric AI”: this is the slogan under which the European Commission presented its proposal for regulating Artificial Intelligence (AI regulation) last week. This historic step positions Europe as the first continent to uniformly regulate AI and the handling of data. With this groundbreaking attempt at regulation, Europe wishes to set standards for the use of AI and data-powered technology – even beyond European borders. That is the right step, as AI is a catalyst of the digital transformation, with significant implications for the economy, society, and the environment. Therefore, clear rules for the use of this technology are needed. This will allow Europe to position itself as a progressive market that is ready for the digital age. In its current form, however, the proposal still raises some questions about its practical implementation. Europe cannot afford to risk its digital competitiveness when competing with America and China for the AI leadership position.
Building Trust Through Transparency
Two Key Proposals for AI Regulation to Build Trust
To build trust in AI products, the proposal for AI regulation relies on two key approaches: Monitoring AI risks while cultivating an “ecosystem of AI excellence.” Specifically, the proposal includes a ban on the use of AI for manipulative and discriminatory purposes or to assess behavior through a “social scoring system”. Use cases that do not fall into these categories will still have to be screened for hazards and placed on a vague risk scale. Special requirements are placed on high-risk applications, with necessary compliance checks both before and after they are put into operation.
It is crucial that AI applications are to be assessed on a case-by-case basis rather than under the sector-centric regulation considered previously. In last year’s white paper on AI and trust, the European Commission called for labeling all applications in business sectors such as healthcare or transportation as “high-risk”. This blanket classification based on defined industries, regardless of the actual use cases, would have been obstructive and would have meant structural disadvantages for entire European industries. The case-by-case assessment allows for the agile and innovative development of AI in all sectors and subjects all industries to the same standards for risky AI applications.
Clear Definition of Risks of an AI Application Is Missing
Despite this new approach, the proposal for AI regulation lacks a concise process for assessing the risks of new applications. Since developers themselves are responsible for evaluating their applications, a clearly defined scale for risk assessment is essential. Articles 6 and 7 outline various risks and give examples of “high-risk applications”, but a transparent process for assessing new AI applications is yet to be defined. Startups and smaller companies are heavily represented among AI developers. These companies, in particular, rely on clearly defined standards and processes to avoid being left behind by larger competitors with more resources. This requires practical guidelines for risk assessment.
If a use case is classified as a “high-risk application”, then various requirements on data governance and risk management must be met before the product can be launched. For example, training data must be tested for bias and inequalities. Also, the model architecture and training parameters must be documented. After deployment, human oversight of the decisions made by the model must be ensured.
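To make the data governance requirement a little more concrete: a first bias check on training data can be as simple as comparing outcome rates across a protected attribute. The following Python sketch is purely illustrative; the file name, the column names, and the four-fifths threshold are assumptions for this example, not anything prescribed by the proposal.

```python
# Illustrative sketch of a simple bias check on training data.
# The file name, column names, and the 0.8 threshold ("four-fifths rule")
# are assumptions for this example, not requirements from the proposal.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training set

# Share of positive labels per group of a protected attribute
positive_rates = df.groupby("gender")["label"].mean()

# Disparate impact ratio: lowest group rate relative to the highest
ratio = positive_rates.min() / positive_rates.max()

print(positive_rates)
if ratio < 0.8:
    print(f"Potential bias: disparate impact ratio of {ratio:.2f} is below 0.8")
```

In a real project, such a check would only be a starting point for a more thorough audit and for the documentation of model architecture and training parameters that the proposal demands.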
Accountability for AI products is a noble and important goal. However, the practical implementation of these requirements once more remains questionable. Many modern AI systems no longer use the traditional approach of static training and testing data. Reinforcement Learning, for example, relies on exploratory training through feedback rather than on a testable data set. And even though advances in Explainable AI are steadily shedding light on the decision-making processes of black-box models, the complex architectures of many modern neural networks make individual decisions almost impossible to trace.
The proposal also announces requirements for the accuracy of trained AI products. This poses a particular challenge for developers because no AI system achieves perfect accuracy. Nor is this ever the objective; instead, misclassification costs are typically weighted so that errors have as little impact as possible in the individual use case. Therefore, it is imperative that performance requirements for predictions and classifications be determined on a case-by-case basis and that universal performance requirements be avoided.
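To illustrate the point: the same error profile can be acceptable in one application and unacceptable in another once use-case-specific error costs are applied. The numbers in the small sketch below are invented purely for illustration.

```python
# Invented numbers: one and the same confusion matrix evaluated under two
# hypothetical cost settings, showing why a universal accuracy target misleads.
import numpy as np

# Confusion matrix counts:
# [[true negatives, false positives],
#  [false negatives, true positives]]
confusion = np.array([[900, 50],
                      [20, 30]])

accuracy = np.trace(confusion) / confusion.sum()  # identical in both scenarios

def expected_cost(matrix, cost_fp, cost_fn):
    """Average cost per prediction under use-case-specific error costs."""
    fp, fn = matrix[0, 1], matrix[1, 0]
    return (cost_fp * fp + cost_fn * fn) / matrix.sum()

# Scenario A: false negatives are expensive (e.g. a medical screening setting)
cost_a = expected_cost(confusion, cost_fp=1, cost_fn=50)
# Scenario B: false positives are expensive (e.g. blocking legitimate users)
cost_b = expected_cost(confusion, cost_fp=50, cost_fn=1)

print(f"Accuracy: {accuracy:.2f}, cost A: {cost_a:.2f}, cost B: {cost_b:.2f}")
```

The accuracy is identical in both scenarios, yet the expected cost per prediction differs by more than a factor of two, which is exactly why case-by-case performance requirements make more sense than universal thresholds.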
Enabling AI Excellence
Europe is Falling Behind
With these requirements, the proposal for AI regulation seeks to inspire confidence in AI technology through transparency and accountability. This is a first step in the right direction toward “AI excellence.” In addition to regulation, however, Europe as a location for Artificial Intelligence must also become more attractive to developers and investors.
According to a recently published study by the Center for Data Innovation, Europe is already falling behind both the United States and China in the battle for global leadership in AI. China has now surpassed Europe in the number of published studies on Artificial Intelligence and has taken the global lead. European AI companies are also attracting significantly less investment than their U.S. counterparts. European AI companies invest less money in research and development and are also less likely to be acquired than American companies.
A Step in the Right Direction: Supporting Research and Innovation
The European Commission recognizes that more support for AI development is needed for excellence on the European market and promises regulatory sandboxes, legal leeway to develop and test innovative AI products, and co-funding for AI research and testing sites. This is needed to make startups and smaller companies more competitive and foster European innovation and competition.
These are necessary steps to lift Europe onto the path to AI excellence, but they are far from sufficient. AI developers need easier access to markets outside the EU and an easier flow of data across national borders. Opportunities to expand into the U.S. and collaborate with Silicon Valley are essential for the digital industry, given how interconnected digital products and services have become.
What is entirely missing from the proposal for AI regulation is education about AI and its potential and risks outside of expert circles. As artificial intelligence increasingly permeates all areas of everyday life, education will become more and more critical. To build trust in new technologies, they must first be understood. Educating non-specialists about both the potential and limitations of AI is an essential step in demystifying Artificial Intelligence and strengthening trust in this technology.
Potential Not Yet Fully Tapped
With this proposal, the European Commission recognizes that AI is leading the way for the future of the European market. Guidelines for a technology of this scope are important – as is the promotion of innovation. For these strategies to bear fruit, their practical implementation must also be feasible for startups and smaller companies. The potential for AI excellence is abundant in Europe. With clear rules and incentives, it can also be realized.