

This article once again demonstrates that a set of security measures must cover all stages of implementation: development, deployment, system administration and, of course, organizational measures. In information systems it is the "human factor" (including users) that is the main security threat. At the same time, the set of measures must be reasonable and balanced: it makes no sense to organize protection that costs more than the data being protected, and sufficient funds are unlikely to be allocated for it anyway.

Introduction

1C: Enterprise is the most widespread accounting system in Russia, but despite this, until version 8.0 its developers paid very little attention to security issues. This was mainly dictated by the product's price niche and its focus on small businesses, where there are no qualified IT specialists and where the cost of deploying and maintaining a secure system would be prohibitive for the enterprise. With the release of version 8.0 the focus had to change: the cost of solutions grew significantly, and the system became much more scalable and flexible, so the requirements changed substantially. Whether the system has become sufficiently reliable and secure is a very individual question. The main information system of a modern enterprise must meet at least the following security requirements:

  • A fairly low probability of system failure due to internal reasons.
  • Reliable user authorization and data protection from incorrect actions.
  • Effective system for assigning user rights.
  • Online backup and recovery system in case of failure.

Do solutions based on 1C: Enterprise 8.0 satisfy these requirements? There is no single answer. Despite significant changes in the access control system, many issues remain unresolved. Depending on how the system is developed and configured, these requirements may be met to a degree sufficient for a particular implementation, or not met at all; however, it is worth noting (and this is a significant consequence of the platform's "youth") that fully satisfying the listed conditions takes truly titanic effort.

This article is intended for developers and implementers of solutions on the 1C: Enterprise platform, as well as for system administrators of organizations where 1C: Enterprise is used, and it describes some aspects of developing and configuring the client-server version of the system from the point of view of information security. It cannot replace the documentation; it only points out some details that have not yet been reflected there. And, of course, neither this article nor all the documentation can capture the full complexity of building a secure information system that must simultaneously meet the conflicting requirements of security, performance, convenience and functionality.

Classification and terminology

The key subject of this article is information threats.

Information threat - the possibility of a situation in which data is read, copied, modified or blocked without authorization.

And, based on this definition, the article classifies information threats as follows:

  • Unauthorized destruction of data
  • Unauthorized modification of data
  • Unauthorized copying of data
  • Unauthorized reading of data
  • Inaccessibility of data

All threats are divided into intentional and unintentional. A realized information threat will be called an incident. Among the features of the system we distinguish:

Vulnerabilities - features that lead to incidents.
Protective measures - features that block the possibility of an incident.

The article mainly considers those cases whose probability stems from the use of the 1C: Enterprise 8.0 technological platform in the client-server version (hereinafter simply 1C or 1C 8.0, where this does not contradict the meaning). Let's define the following main roles in relation to the use of the system:

  • Operators - users who have rights to view and modify data, limited by an application role, but no administrative functions
  • System administrators - users who have administrative rights in the system, including administrative rights in the operating systems of the application server and the MS SQL server, administrative rights to MS SQL, etc.
  • Information security administrators - users who are delegated certain administrative functions in the 1C infobase (such as adding users, testing and repair, backup, configuring the applied solution, etc.)
  • System developers - users who develop the applied solution. In general, they may have no access to the production system.
  • Persons without direct access to the system - users who are not delegated rights of access to 1C, but who can in one way or another affect the operation of the system (usually all users of the same Active Directory domain in which the system is installed). This category is considered primarily to identify potentially dangerous subjects in the system.
  • Automated administrative scripts - programs to which certain functions are delegated and which are designed to perform certain actions automatically (for example, import/export of data)

Two points should be noted here. First, this classification is very rough and does not take divisions within each group into account; such divisions will be introduced for specific cases. Second, it is assumed that other persons cannot influence the operation of the system, which must be ensured by means external to 1C.

Any security system must be designed with appropriateness and cost of ownership in mind. In general, when developing and implementing an information system, it is necessary that the price of protecting the system corresponds to:

  • the value of the information being protected;
  • the cost of creating an incident (in the case of a deliberate threat);
  • financial risks in the event of an incident

It is senseless and even harmful to organize protection that costs much more than its estimated financial benefit. There are several methods for assessing the risks of information loss, but they are not considered in this article. Another important aspect is maintaining a balance between the often conflicting requirements of information security, system performance, convenience and simplicity of work, speed of development and implementation, and other requirements for enterprise information systems.

The main features of the information security mechanism of the system

1C: Enterprise 8.0 comes in two versions: file and client-server. The file version cannot be considered as ensuring the information security of the system for the following reasons:

  • Data and configuration are stored in a file that is readable and writable by all users of the system.
  • As will be shown below, system authorization is very easy to bypass.
  • The integrity of the system is provided only by the core of the client part.

In the client-server version, MS SQL Server is used to store information, which provides:

  • More reliable data storage.
  • Isolation of files from direct access.
  • Better transaction and locking mechanisms.

Despite the significant differences between the file and client-server versions of the system, they have a single access control scheme at the application solution level, which provides the following capabilities:

  • User authorization using the password specified in 1C.
  • User authorization by current Windows user.
  • Assigning roles to system users.
  • Restricting the execution of administrative functions by role.
  • Assigning available interfaces by role.
  • Restricting access to metadata objects by role.
  • Restricting access to object attributes by role.
  • Restricting access to data objects by roles and session parameters.
  • Restricting interactive access to data and executable modules.
  • Some restrictions on code execution.

In general, the data access scheme used is fairly typical for information systems of this level. However, with respect to this implementation of the three-tier client-server architecture, there are several fundamental aspects that lead to a relatively large number of vulnerabilities:

  1. A large number of stages of data processing, and at each stage, different rules for accessing objects may apply.

    A somewhat simplified diagram of the data processing steps that are significant from a security point of view is shown in Fig. 1. The general rule in 1C is that restrictions decrease as you move down this scheme; therefore, exploiting a vulnerability at one of the upper levels can disrupt the operation of the system at all levels.

  2. Insufficiently debugged procedures for controlling the transmitted data when moving from level to level.

    Unfortunately, not all internal mechanisms of the system are ideally debugged, especially the non-interactive ones, whose debugging is always more labor-intensive on the one hand, and more critical on the other. This "disease" is not exclusive to 1C; it is found in many server products of most vendors. Only in recent years has attention to these problems increased significantly.

  3. Insufficiently high average qualifications of developers and system administrators inherited from the previous version.

    Products of the 1C: Enterprise line were initially focused on ease of development and support and on work in small organizations, so it is not surprising that, historically, a significant share of "developers" of application solutions and "administrators" of systems lack the knowledge and skills to work with the much more complex product that version 8.0 is. The problem is aggravated by the practice, common among franchisee companies, of training "in battle" at the clients' expense, without approaching the issue systematically. To give 1C its due, over the past few years the situation has gradually improved: serious franchisee companies now approach the recruitment and training of personnel more responsibly, the level of support from 1C has risen significantly, and certification programs focused on a high level of service have appeared. Still, the situation cannot be corrected overnight, so this factor should be taken into account when analyzing the security of the system.

  4. Comparatively small age of the platform.

    Among products of similar focus and purpose, this is one of the youngest solutions. The platform's functionality more or less settled down less than a year ago. At the same time, each release of the platform, starting with 8.0.10 (the release in which almost all of the system's current capabilities were implemented), has been noticeably more stable than the previous ones. The functionality of the typical application solutions is still growing by leaps and bounds, even though barely half of the platform's capabilities are used. Of course, under such conditions any talk of stability is rather tentative, but on the whole it must be admitted that in many respects solutions on the 1C 8.0 platform significantly outperform similar solutions on the 1C 7.7 platform in functionality and performance (and often in stability as well).

So, the system (and possibly a typical application solution) has been deployed at the enterprise and installed on its computers. First of all, it is necessary to create an environment in which configuring 1C security makes sense at all: the surrounding infrastructure must be configured so that the system's own security settings cannot simply be bypassed.

Follow the general rules for setting up security.

There can be no question of any information security of the system if the basic principles of creating secure systems are not followed. Be sure to make sure that at least the following conditions are met:

  • Access to the servers is physically limited and their uninterrupted operation is ensured:
    • server equipment meets reliability requirements, replacement of faulty server equipment is a rehearsed procedure, and for particularly critical areas redundant hardware schemes are used (RAID, power supply from multiple sources, multiple communication channels, etc.);
    • servers are located in a locked room, and this room is opened only for the duration of work that cannot be performed remotely;
    • only one or two people have the right to open the server room; in case of emergency, a notification system for responsible persons has been developed;
    • uninterrupted power supply of the servers is provided;
    • the normal climatic mode of operation of the equipment is ensured;
    • there is a fire alarm in the server room, there is no possibility of flooding (especially for the first and last floors);
  • The settings of the network and information infrastructure of the enterprise are correct:
    • all servers have firewalls installed and configured;
    • all users and computers are authorized on the network, passwords are complex enough to be impossible to guess;
    • the system operators have sufficient rights to work normally with it, but they do not have the rights to administrative actions;
    • anti-virus tools are installed and enabled on all computers in the network;
    • it is desirable that users (except for network administrators) do not have administrative rights on client workstations;
    • access to the Internet and to removable media should be regulated and limited;
    • system audit of security events must be configured;
  • The main organizational issues have been resolved:
    • users have sufficient qualifications to work with 1C and hardware;
    • users are notified of responsibility for violation of operating rules;
    • appointed financially responsible for each material element of the information system;
    • all system blocks sealed and closed;
    • Pay particular attention to instructing and supervising cleaners, builders, and electricians. These persons can, through negligence, cause damage that is not comparable to the deliberate harm caused by an unscrupulous user of the system.

Attention! This list is not exhaustive, but only describes what is often overlooked when deploying any fairly complex and expensive information system!

Also check that the following 1C-specific conditions are met:

  • MS SQL Server, the application server and the client side run on different computers, and the server applications run under the rights of specially created Windows users;
  • For MS SQL Server
    • mixed authorization mode is set
    • MS SQL users included in the serveradmin role do not participate in the work of 1C,
    • for each 1C infobase, a separate MS SQL user is created that does not have privileged access to the server,
    • the MS SQL user of one infobase does not have access to other infobases;
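For SQL Server 2000 (the version current for the platform described here), a dedicated non-privileged login per infobase can be sketched roughly as follows; the login name, password and database name are illustrative assumptions, and the role actually required should be checked against the 1C documentation:

```sql
-- Create a login that belongs to no server-level role (unlike "sa" or serveradmin members)
EXEC sp_addlogin @loginame = 'user_ib1', @passwd = 'Str0ngPassw0rd';

-- Grant it access only to its own infobase database
USE IB1;
EXEC sp_grantdbaccess 'user_ib1';
EXEC sp_addrolemember 'db_owner', 'user_ib1';
```

The point of the sketch is that the login has full rights inside its own database but no rights in other infobase databases and no server-wide roles.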
  • Users do not have direct access to Application Server and MS SQL Server files
  • Operator workplaces are equipped with Windows 2000 / XP (not Windows 95/98 / Me)

Do not neglect the system developers' recommendations or the documentation. Important materials on system tuning are published on ITS disks in the "Methodological Recommendations" section. Pay particular attention to the following articles:

  1. Features of the work of applications with the 1C: Enterprise server
  2. Data placement in 1C: Enterprise 8.0
  3. Updating 1C: Enterprise 8.0 by Microsoft Windows users without administrator rights
  4. Editing the list of users on behalf of a user who does not have administrative rights
  5. Configuring Windows XP SP2 firewall settings to run SQL Server 2000 and SQL Server Desktop Engine (MSDE)
  6. Configuring COM+ parameters in Windows XP SP2 for 1C: Enterprise 8.0 server operation
  7. Configuring Windows XP SP2 firewall parameters for 1C: Enterprise 8.0 server operation
  8. Configuring Windows XP SP2 firewall settings for the HASP License Manager
  9. Creating a backup copy of an infobase using SQL Server 2000
  10. Questions of installation and configuration of 1C: Enterprise 8.0 in the "client-server" version (one of the most important articles)
  11. Peculiarities of Windows Server 2003 settings when installing the 1C: Enterprise 8.0 server
  12. Regulation of user access to the infobase in the client-server version (one of the most important articles)
  13. The 1C: Enterprise server and SQL Server
  14. Detailed procedure for installing 1C: Enterprise 8.0 in the "client-server" version (one of the most important articles)
  15. Using the built-in language on the 1C: Enterprise server

But read the documentation critically: for example, in the article "Questions of installation and configuration of 1C: Enterprise 8.0 in the "client-server" version", the rights required for the USER1CV8SERVER user are described not quite accurately. Below there will be references to this list; for example, [ITS1] means the article "Features of the work of applications with the 1C: Enterprise server". All references to articles are given as of the latest ITS issue at the time of writing (January 2006).

Use authorization capabilities for users combined with Windows authorization

Of the two possible user authorization modes - built-in 1C authorization and authorization combined with Windows OS authorization - choose combined authorization if possible. It saves users from juggling multiple passwords while working, yet does not lower the security level of the system. However, even for users who use only Windows authorization, it is highly desirable to set a password at creation time and only then disable 1C authorization for that user. To ensure that the system can be recovered if the Active Directory structure is destroyed, at least one user who can enter the system with 1C authorization must be kept.

When creating application solution roles, do not add "in reserve" rights

Each role of the application solution must reflect the minimum set of rights required to perform the actions that role defines. However, some roles may not be used on their own. For example, for the interactive launch of external data processors you can create a separate role and add it to all users who need to work with external processors.

Review logs and system logs regularly

Whenever possible, regulate and automate the review of logs and protocols of system operation. With proper configuration and regular review of logs (filtering by important events only), unauthorized actions can be detected in time, or even prevented at the preparation stage.

Some features of the client-server version

This section describes some features of the client-server version and their impact on security. For better readability, the following convention is used:

Attention! - marks a description of a vulnerability.

Storing information that controls access to the system

Storing a list of information security users

All information about the list of users of a given infobase and the roles available to them is stored in the Params table of the MS SQL database (see [ITS2]). A look at the structure and contents of this table makes it obvious that all user information is stored in a single record whose FileName field has the value "users.usr".

Since we assume that users do not have access to the MS SQL database, this fact by itself cannot be used by an attacker; however, if it is possible to execute code in MS SQL, this "opens the door" to obtaining any (!) access to 1C. The same mechanism (with minor changes) also works for the file version of the system, which, given the peculiarities of the file version, completely rules it out for building secure systems.

Recommendation: At present there is no way to fully protect the application from such a modification, other than using triggers at the MS SQL Server level, which, on the other hand, can cause problems when updating the platform version or changing the list of users. To track such changes you can use the 1C logs (paying attention to "suspicious" logins in configurator mode without a specified user), keep SQL Profiler constantly running (which will hurt system performance badly), or configure the Alerts mechanism (most likely together with triggers).
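As a sketch of the trigger-based tracking mentioned above (the table and column names are those described in this section; the trigger name and message text are assumptions), something along these lines could log modifications of the user-list record:

```sql
CREATE TRIGGER tr_UsersListChanged
ON Params
AFTER UPDATE, DELETE
AS
BEGIN
    -- Fire only when the record holding the 1C user list is touched
    IF EXISTS (SELECT * FROM deleted WHERE FileName = 'users.usr')
        RAISERROR ('1C user list record was modified', 10, 1) WITH LOG;
END
```

RAISERROR ... WITH LOG writes to the SQL Server error log and the Windows application event log, which an Alert can then pick up. Remember that, as noted above, such a trigger may interfere with legitimate user-list changes and platform updates.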

Storing information about the list of infobases on the server

For each 1C application server, information is stored about the list of MS SQL databases connected to it. Each infobase uses its own connection string between the application server and the MS SQL server. Information about the infobases registered on the application server, together with their connection strings, is stored in the srvrib.lst file, which is located on the server in the <Common Application Data>/1C/1Cv8 directory (for example, C:\Documents and Settings\All Users\Application Data\1C\1Cv8\srvrib.lst). For each infobase, a complete connection string is stored, including the MS SQL user's password when the mixed MS SQL authorization model is used. It is the presence of this file that makes unauthorized access to the MS SQL database a real concern, and if, contrary to the recommendations, a privileged user (for example, "sa") is used to access even one database, then, in addition to the threat to one infobase, the entire system using that MS SQL server is threatened.

It is interesting to note that the use of mixed authorization and Windows authorization on the MS SQL server leads to different types of problems when accessing this file. So the key negative properties of Windows authorization will be:

  • all infobases work on the application server and on the MS SQL server under a single set of rights (most likely a redundant one)
  • from the 1C application server process (or, in general, under the USER1CV8SERVER user or its analogue) it is easy to connect to any infobase without specifying a password

On the other hand, it may be harder for an attacker to execute arbitrary code in the context of the USER1CV8SERVER user than to retrieve the specified file. Incidentally, the existence of this file is another argument for splitting server functions across different computers.

Recommendation: The srvrib.lst file should only be accessible to the server process. Be sure to set up an audit to change this file.

Unfortunately, by default this file is hardly protected at all, which must be taken into account when deploying the system. Ideally, the application server would prevent this file from being read or written while it is running (including by user connections executing on this server).
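On the Windows 2000/XP/2003 systems of that period, restricting the file to the server account could look roughly like this (the account name and path are the examples used in this article; cacls will ask for confirmation before replacing the ACL):

```bat
rem Replace the ACL so that only the service account (read) and administrators (full) can touch the file
cacls "C:\Documents and Settings\All Users\Application Data\1C\1Cv8\srvrib.lst" /G USER1CV8SERVER:R Administrators:F
```

When a new infobase has to be registered, temporarily grant the service account write access, then restore the restricted ACL; combine this with auditing of the file, as recommended above.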

Lack of authorization when creating infobases on the server

Attention! The authorization error was fixed in release 8.0.14 of the 1C: Enterprise platform. That release introduced the concept of a "1C: Enterprise server administrator", but until the list of administrators is specified on the server, the system behaves as described below, so do not forget about this possibility.

Probably the greatest vulnerability in this section is the ability to add an almost unlimited number of infobases to the application server, as a result of which any user who is able to connect to the application server automatically gets the opportunity to run arbitrary code on it. Let's look at an example.

Suppose the system is installed in the following configuration:

  • MS SQL Server 2000 (e.g. network name SRV1)
  • Server 1C: Enterprise 8.0 (network name SRV2)
  • Client part of 1C: Enterprise 8.0 (network name WS)

It is assumed that the user (hereinafter USER) working on WS has at least minimal access to one of the IBs registered on SRV2, but does not have privileged access to SRV1 and SRV2. In general, the combination of functions by the listed computers does not affect the situation. The system was configured taking into account the recommendations in the documentation and on the ITS disks. The situation is reflected in Fig. 2.


Recommendations:

  • configure COM+ security on the application server so that only 1C users have the right to connect to the application server process (see [ITS12] for details);
  • the srvrib.lst file must be read-only for the USER1CV8SERVER user (temporarily enable writing to add a new IB to the server);
  • to connect to MS SQL, use only the TCP / IP protocol, in this case you can:
    • restrict connections using a firewall;
    • configure the use of a non-standard TCP port, which will make it harder for "outside" 1C infobases to connect;
    • use encryption of transmitted data between the application server and the SQL server;
  • configure the server firewall so that it is impossible to use third-party MS SQL servers;
  • use intranet security tools to rule out the appearance of an unauthorized computer in the local network (IPSec, group security policies, firewalls, etc.);
  • under no circumstances grant USER1CV8SERVER administrative rights on the application server.

Using code running on the server

When using the client-server version of 1C, the developer can distribute code execution between the client and the application server. For code (a procedure or function) to be executed only on the server, it must be placed in a common module with the "Server" property set and, in cases when execution of the module is allowed not only on the server, the code must be placed in a section bounded by "#If Server":

#If Server Then
Function OnServer(Param1, Param2 = 0) Export   // Despite its simplicity, this function is executed on the server
    Param1 = Param1 + 12;
    Return Param1;
EndFunction
#EndIf

When using code that runs on the server, keep in mind that:

  • the code is executed with USER1CV8SERVER rights on the application server (COM objects and server files are available);
  • all user sessions are performed by one instance of the service, therefore, for example, a stack overflow on the server will cause all active users to be disconnected;
  • debugging server modules is difficult (for example, you cannot set a breakpoint in the debugger), but must be done;
  • transfer of control from the client to the application server and vice versa may require significant resources with large volumes of transmitted parameters;
  • interactive tools (forms, spreadsheet documents, dialog boxes), external reports, and external data processors cannot be used on the application server;
  • the use of global variables (application module variables declared with the "Export" indication) is not allowed;

For more details see [ITS15] and other articles of ITS.

The application server must have special reliability requirements. In a properly built client-server system, the following conditions must be met:

  • no actions of the client application should interrupt the server (except for administrative cases);
  • the server cannot run program code received from the client;
  • resources must be "fairly" distributed across client connections, ensuring the availability of the server regardless of the current load;
  • in the absence of data locks, client connections should not affect each other's work;
  • there is no user interface on the server, but monitoring and logging tools should be developed;

In general, the 1C system is built in such a way as to approach these requirements (for example, it is impossible to force external processing to be performed on the server), but there are still several unpleasant features, therefore:

Recommendation: Adhere to the principle of a minimal interface when developing the server side. The number of entry points into server modules from the client application should be very limited, and their parameters strictly regulated.

Recommendation: When receiving parameters of procedures and functions on the server, validate them (check that each parameter matches the expected type and range of values). This is not done in standard solutions, but it is highly desirable to introduce mandatory validation in your own developments.

Recommendation: When building query text (and even more so a parameter of the Execute command) on the server side, do not use strings received from the client application.
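The validate-before-use idea can be sketched in Python rather than the 1C language (the function name and the allowed range are illustrative assumptions; the pattern, not the API, is the point):

```python
def on_server(param1):
    # Reject anything that is not of the expected type.
    # (bool is excluded explicitly because it is a subclass of int in Python.)
    if not isinstance(param1, int) or isinstance(param1, bool):
        raise TypeError("param1 must be an integer")
    # Reject values outside the expected range.
    if not 0 <= param1 <= 1000:
        raise ValueError("param1 is out of the allowed range")
    # Only now is the parameter used in server-side logic.
    return param1 + 12
```

A client-supplied string such as "5; DROP TABLE" fails the type check before it can reach any query-building code, which is exactly what the recommendation above asks for.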

The general recommendation is to familiarize yourself with the principles of building secure web database applications and to work along similar lines. The similarity is really considerable: first, like a web application, the application server is an intermediate layer between the database and the user interface (the main difference being that the web server also forms the user interface); second, from the security standpoint, data received from the client cannot be trusted, since external reports and data processors can be launched.

Passing parameters

Passing parameters to a function (procedure) executed on the server is a rather delicate issue. This is primarily due to the need to transfer them between the application server process and the client. When control is transferred from the client-side to the server-side, all transmitted parameters are serialized, transferred to the server, where they are "unpacked" and used. When moving from the server side to the client side, the process is reversed. It should be noted here that this scheme correctly handles passing parameters by reference and by value. When transferring parameters, the following restrictions apply:

  • Only immutable values (that is, values that cannot change) can be passed between the client and the server (in both directions): primitive types, references, universal collections, system enumeration values, value storages. An attempt to pass anything else crashes the client application (even when it is the server trying to send an invalid parameter).
  • When passing parameters, it is not recommended to transfer large amounts of data (for example, strings of more than 1 million characters), this may negatively affect server performance.
  • You cannot pass parameters containing a circular reference, both from the server to the client, and vice versa. When an attempt is made to pass such a parameter, the client application crashes (even if the server tries to pass an incorrect parameter).
  • It is not recommended to transfer very complex collections of data. When trying to pass a parameter with a very large nesting level, the server crashes (!).

Attention! Probably the most unpleasant feature at the moment is the error in passing complex collections of values. For example, the code:

NestingLevel = 1250;
M = New Array;
PassedParameter = M;
For Counter = 1 To NestingLevel Do
    Inner = New Array;
    M.Add(Inner);
    M = Inner;
EndDo;
ServerFunction(PassedParameter);

causes the server to crash, disconnecting all users, and this happens before control is transferred to the built-in language code.
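This failure mode is not unique to 1C: most serializers traverse a collection recursively, so deeply nested data can exhaust the stack before any application code runs. A Python sketch (not 1C code) shows the same effect with the standard json module:

```python
import json

def nested_list(depth):
    # Build a chain of lists nested `depth` levels deep.
    outer = []
    node = outer
    for _ in range(depth):
        inner = []
        node.append(inner)
        node = inner
    return outer

# A shallow structure serializes without problems: [[[[]]]]
print(json.dumps(nested_list(3)))

# A deeply nested one blows the recursion limit inside the serializer,
# before any handler code gets a chance to run.
try:
    json.dumps(nested_list(5000))
except RecursionError:
    print("serializer failed on deeply nested data")
```

The difference is that here the failure is an exception in one process, whereas in the 1C 8.0 case it brings down the shared server process for everyone.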

Using unsafe functions on the server side.

Not all built-in language features can be used in code running on the application server, and even among the available tools there are many "problem" constructs, which can be roughly classified as follows:

  • constructs that make it possible to execute code not contained in the configuration (the "Code execution" group)
  • constructs that can give the client application information about the server's file and operating system or perform actions unrelated to working with data (the "Rights violation" group)
  • constructs that can cause an abnormal server shutdown or consume very large amounts of resources (the "Server failure" group)
  • constructs that can cause a failure in the client's work (the "Client failure" group); this type is not considered here. Example: passing a mutable value to the server.
  • programming errors in algorithms (infinite loops, unbounded recursion, etc.) (the "Programming errors" group)

The main problem constructs known to me (with examples) are listed below:

Procedure Execute(<String>)

Code execution. Executes the fragment of code that is passed to it as a string value. When using it on the server, make sure that data received from the client is never used as the parameter. For example, the following usage is unacceptable:

#If Server Then
Procedure OnServer(Param1) Export
    Execute(Param1);
EndProcedure
#EndIf
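
A safer pattern than executing a client-supplied string is to dispatch on a fixed list of known actions. The sketch below is illustrative only: the procedure and action names are my own, not part of any standard configuration.

```bsl
#If Server Then
// Instead of Execute(Param1), map the client's request onto a
// fixed set of server procedures. Unknown actions are rejected.
Procedure OnServer(ActionName) Export
    If ActionName = "RecalculateTotals" Then
        RecalculateTotals();
    ElsIf ActionName = "RebuildSearchIndex" Then
        RebuildSearchIndex();
    Else
        Raise "Unknown server action: " + ActionName;
    EndIf;
EndProcedure
#EndIf
```

This keeps the set of code the client can trigger under the control of the configuration, which is the whole point of the restriction above.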

Type "COMObject" (constructor New COMObject(<Name>, <ServerName>))

Creates a COM object of an external application with USER1CV8SERVER privileges on the application server (or on another specified computer). When using it on the server, make sure that no parameters are passed in from the client application. On the server side, however, this feature is genuinely useful for import/export, sending data over the Internet, implementing non-standard functions, and so on.

Function GetCOMObject(<FileName>, <COMClassName>)
Rights violation and code execution. Similar to the previous one, except that it obtains the COM object corresponding to a file.
Procedures and functions ComputerName(), TempFilesDir(), BinDir(), OSUsers()
Rights violation. Executing them on the server reveals details of how the server subsystem is organized. When using them on the server, make sure that the resulting data either is not transmitted to the client or is not available to operators without appropriate authorization. Pay special attention to the fact that data can be returned through a parameter passed by reference.
Procedures and functions for working with files (CopyFile, FindFiles, MergeFiles, and many others), as well as the "File" type

Rights violation. Executing them on the server gives general access to local (and network) files that are accessible under the rights of the USER1CV8SERVER user. When used deliberately, this makes it possible to implement tasks such as server-side data import/export effectively.

Be sure to check the 1C user's rights before using these functions. To check them, you can use the following construct in a server module:

#If Server Then
Procedure WorkWithFile() Export
    AdministratorRole = Metadata.Roles.Administrator;
    User = SessionParameters.CurrentUser;
    If User.Roles.Contains(AdministratorRole) Then
        // The code that works with files goes here
    EndIf;
EndProcedure
#EndIf

Be sure to validate the parameters if you use these procedures and functions, otherwise there is a risk of accidentally or deliberately causing irreparable harm to the 1C application server, for example, by executing the following code on the server:

Path = "C:\Documents and Settings\All Users\Application Data\1C\1Cv8\";
MoveFile(Path + "srvrib.lst", Path + "Here'sWhereFile");

After such code is executed on the server, provided that the USER1CV8SERVER user has permission to modify this file as described above, and after the server process restarts (by default, 3 minutes after all users disconnect), whether the server will start at all becomes a BIG question. Complete deletion of files is also possible...

The types "XBase", "BinaryData", "XMLReader", "XMLWriter", "XSLTransform", "ZipFileWriter", "ZipFileReader", "TextReader", "TextWriter"
Rights violation. Executing them on the server gives access to local (and network) files of certain types, for reading and writing, under the rights of the USER1CV8SERVER user. When used deliberately, this makes it possible to implement tasks such as server-side data import/export, logging of certain functions, and administrative tasks effectively. The recommendations are generally the same as in the previous paragraph, but you should also take into account the possibility of transferring the data of these files (though not objects of all these types) between the client and server parts.
The "SystemInfo" type
Rights violation. If used incorrectly and the data is transferred to the client part of the application, it can reveal information about the application server. It is advisable to restrict the right to use it.
The types "InternetConnection", "InternetMail", "InternetProxy", "HTTPConnection", "FTPConnection"

Rights violation. When used on the server, these establish connections with remote computers from the application server under USER1CV8SERVER rights. Recommendations:

  • Controlling parameters when calling methods.
  • Control of user rights 1C.
  • Severe restrictions on the rights of the user USER1CV8SERVER to access the network.
  • Correct configuration of the firewall on the 1C application server.

When used correctly, it is convenient to organize, for example, sending e-mail from an application server.

Types "InformationBaseUserManager", "InformationBaseUser"

Rights violation. In case of incorrect use (in a privileged module), it is possible to add users or change the authorization parameters of existing users.

Function Format

Server crash. Yes! This seemingly harmless function, if you do not control its parameters and execute it on the server, can cause abnormal termination of the server application. The error shows up when formatting numbers with the leading-zeros output mode and a large number of digits, for example:

Format(1, "ND=999; NLZ=");

I hope this error will be fixed in upcoming platform releases, but until then, check the call parameters in every call to this function that can execute on the server.

Procedures and functions for value storage (ValueToStringInternal, ValueToFile)
Server crash. These functions do not check collections for circular references or very deep nesting, so in certain very special cases they can crash.

Errors with boundary and special parameter values in functions. Execution control.

One of the problems encountered when working with the server is the great "responsibility" of server functions: an error in one connection can abnormally terminate the entire server application, and all connections share a single "resource space". Hence the need to control the main runtime parameters:

  • For built-in language functions, check their launch parameters (a prime example is the "Format" function)
  • When using loops, make sure the loop exit condition can actually be triggered. If the loop is potentially infinite, artificially limit the number of iterations:

    MaxIterationCount = 1000000;
    IterationCount = 1;
    While FunctionThatMayNeverReturnFalse()
        And (IterationCount < MaxIterationCount) Do
        // .... loop body
        IterationCount = IterationCount + 1;
    EndDo;
    If IterationCount >= MaxIterationCount Then
        // .... handle the event of excessively long loop execution
    EndIf;

  • When using recursion, limit the maximum nesting level.
  • When forming and executing queries, try to prevent very long selections and selections of a large amount of information (for example, when using the "IN HIERARCHY" condition, do not use an empty value)
  • When designing an infobase, provide a sufficiently large margin of digits for numeric fields (otherwise addition and multiplication become non-commutative and non-associative, which complicates debugging)
  • In executable queries, check the logic for the presence of NULL values and the correctness of conditions and query expressions that use NULL.
  • When using collections, control the ability to transfer them between the application server and the client.
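
The recursion point above can be sketched in the same spirit as the loop example already shown; the names ProcessRow and MaxDepth below are illustrative, not platform functions.

```bsl
// Sketch: pass the current depth explicitly and stop recursing
// once the assumed limit is reached, instead of trusting the data.
Procedure ProcessRow(Row, Depth = 0)
    MaxDepth = 100;
    If Depth > MaxDepth Then
        // ... handle the excessive-nesting case instead of recursing further
        Return;
    EndIf;
    // ... process the current row
    For Each ChildRow In Row.Rows Do
        ProcessRow(ChildRow, Depth + 1);
    EndDo;
EndProcedure
```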

Using terminal access to the client side to restrict access

It is not uncommon to see recommendations to use terminal access to restrict access to data and to increase performance by executing client-side code on a terminal server. Yes, properly configured terminal access can indeed raise the overall level of system security, but unfortunately in practice the system's security often only decreases. Let's try to figure out why. There are currently two common means of organizing terminal access: Microsoft Terminal Services (the RDP protocol) and Citrix Metaframe Server (the ICA protocol). Citrix tools generally provide far more flexible access administration, but the cost of those solutions is significantly higher. We will only consider the basic features common to both protocols that can lower the overall level of security. There are three main dangers when using terminal access:
  • The ability to block the work of other users by seizing an excessive amount of resources
  • Access to data of other users.
  • Unauthorized copying of data from a terminal server to a user's computer

In any case, terminal services allow you to:

  • Increase the reliability of work (in case of a failure on the terminal computer, the user can subsequently continue to work from the same place)
  • Restrict access to the client application and the files it stores.
  • Transfer the computing load from the user's workplace to the terminal access server
  • Manage system settings more centrally. For users, the saved settings will be valid regardless of the computer from which they logged into the system.
  • In some cases, you can use a terminal solution for remote access to the system.

It is necessary to limit the number of possible connections to the terminal server of one user

Because the 1C client application is so "gluttonous" with resources, it is imperative to limit the maximum number of simultaneous connections by one user (operator) to the terminal server. An actively used connection can consume up to 300 MB of memory for just one instance of the application. In addition to memory, processor time is actively used, which also does not help the stability of other users of this server. Along with preventing overuse of server resources, this limitation prevents someone else's account from being used. It is implemented by the standard terminal server settings.

You must not allow more than one or two 1C client applications to run simultaneously in one connection

This is dictated by the same reasons as the previous point, but is technically harder to implement. The problem is that it is almost impossible to prevent a restart of 1C by means of the terminal server (the reason is explained below), so this feature has to be implemented at the level of the applied solution (which is also not a good solution, since "hung" sessions may remain when the application terminates incorrectly, and the applied solution has to be modified in the application module and some catalogs, which complicates applying updates from 1C). It is highly desirable to leave the user the ability to run 2 applications, so that some actions (for example, generating reports) can be launched in the background: the client application, unfortunately, is effectively single-threaded.

It is not recommended to grant terminal server access rights to users who are allowed to launch resource-intensive computing tasks in 1C, and such launches should be prevented while other users are actively working.

Of course, it is better to grant terminal server access only to users who do not use tasks such as data mining, geographic schemas, import/export, and other tasks that seriously load the client side of the application. If such tasks must nevertheless be allowed, then it is necessary to notify the user that these tasks may affect the performance of other users, to record the start and end of such a process in the log, to allow execution only at a scheduled time, and so on.

It is necessary to make sure that each user can write only to strictly defined directories of the terminal server and other users do not have access to them.

First, if you do not restrict the ability to write to shared directories (such as the directory where 1C is installed), then an attacker can change the program's behavior for all users. Second, the data of one user (temporary files, saved report settings files, etc.) must under no circumstances be available to another user of the terminal server; in general, this rule is satisfied by normal settings. Third, an attacker still has the opportunity to "litter" the partition so that no space is left on the hard disk. I know I will be told that Windows, starting with Windows 2000, has a quota mechanism, but it is a rather costly mechanism, and I have hardly ever seen it actually used.

If the previous access-configuration questions were generally fairly easy to deal with, then even such a (seemingly) simple task as regulating user access to files is implemented non-trivially. First, if the quota mechanism is not used, users can save large files. Second, the system is built in such a way that it will almost always be possible to save a file so that it is available to another user.

Given that the task is difficult to solve completely, it is recommended to audit most file events.

It is necessary to disable the connection (mapping) of disk devices, printers and the clipboard of the client workstation.

RDP and ICA make it possible to automatically connect the disks, printers, clipboard, and COM ports of the terminal computer to the server. If this possibility exists, it is almost impossible to prevent the launch of extraneous code on the terminal server and the saving of data from 1C on the terminal access client. Allow these features only for individuals with administrative rights.

Network file access from the terminal server must be restricted.

If this is not done, the user can again run unwanted code or save data. Since the regular log does not track file events (which, by the way, would be a good feature for the platform developers to implement), and it is almost impossible to configure system-wide audit across the entire network (there would not be enough resources to service it), it is better for printing and e-mail to be the user's only ways of taking data out. Pay special attention to ensuring that the terminal server does not work directly with users' removable media.

Under no circumstances should you leave the application server on the terminal server when creating a secure system.

If the application server is launched on the same computer as the client applications, then there are many opportunities for disrupting its normal operation. If for some reason it is impossible to separate the functions of the terminal server and the application server, then pay special attention to user access to files used by the application server.

It is necessary to exclude the possibility of launching all applications except 1C: Enterprise on the terminal server.

This is one of the hardest wishes to implement. To begin with, you need to properly configure the Group Security Policy in the domain. All Administrative Templates and Software Restriction Policies must be configured correctly. To test yourself, make sure that the known ways of launching arbitrary programs are blocked.

The difficulty of implementing this requirement often means that an "extra" 1C session can still be launched on the terminal server (even if other applications are restricted, it is in principle impossible to prohibit the launch of 1C by means of Windows).

Consider the limitations of the regular registration log (all users use the program from one computer)

Obviously, since users open 1C in terminal mode, it is the terminal server that will be recorded in the registration log. The log does not report which computer the user connected from.

Terminal Server - Security or Vulnerability?

So, having considered the main features of terminal servers, we can say that a terminal server can potentially help distribute the computing load, but building a secure system on it is quite difficult. One of the cases where using a terminal server is most effective is launching 1C without Windows Explorer, in full-screen mode, for users with limited functionality and a specialized interface.

Client side work

Using Internet Explorer (IE)

One of the conditions for the normal operation of the 1C client part is the use of Internet Explorer components. You have to be very careful with these components.

Attention! First, if a spyware or adware module has attached itself to IE, it will be loaded even when you merely view HTML files in 1C. So far I have not seen deliberate use of this feature, but in one organization (whose antivirus was not updated) I did find a loaded "spy" module of one of the pornographic networks while 1C was running; the symptom was discovered while configuring the firewall, when it became clear that 1C was trying to connect to a porn site on port 80. This, incidentally, is one more argument in favor of comprehensive protection.

Attention! Second, the 1C system allows the use of Flash movies, ActiveX objects, and VBScript in displayed HTML documents, as well as sending data to the Internet and even opening PDF files (!), although in the latter case it does ask whether to "open or save"... in general, whatever your heart desires. An example of a not entirely sensible use of the built-in HTML viewing and editing capabilities:

  • Create a new HTML document (File -> New -> HTML Document).
  • Go to the "Text" tab of the blank document.
  • Delete the text (completely).
  • Go to the "View" tab of this document
  • Using drag-and-drop, move a file with the SWF extension (a Flash movie file) from an open Explorer window into the document window, for example from the browser cache; for fun you can also use a Flash game.
  • How lovely! You can run a game right inside 1C!

From a system security point of view, this is completely wrong. So far, I have not seen special attacks on 1C through this vulnerability, but most likely it will turn out to be a matter of time and the value of your information.

There are some more minor points that appear when working with an HTML document field, but the main two are listed. Although, if you approach these features creatively, you can organize truly amazing interface capabilities of working with 1C.

Using external reports and processing.

Attention! External reports and processing are, on the one hand, a convenient way to implement additional print forms, regulated reporting, and specialized reports; on the other hand, they are a potential way to bypass many of the system's security restrictions and to disrupt the operation of the application server (for an example, see "Passing parameters" above). The 1C rights system includes a special role right, "Interactive opening of external processing", but it does not completely solve the problem. For a complete solution, you must significantly narrow the circle of users who can manage external print forms, regulated reports, and other standard features of typical solutions implemented with external processing. For example, by default in the SCP configuration, all basic user roles can work with the catalog of additional print forms, and that is, in effect, the ability to run any external processing.

Use of standard mechanisms of typical solutions and platform (data exchange)

Some of the standard mechanisms are potentially dangerous, and in quite unexpected ways.

Printing Lists

Any list (for example, a directory or register of information) in the system can be printed or saved to a file. To do this, it is enough to use the standard feature available from the context menu and the "Actions" menu:

Keep in mind that virtually everything the user sees in lists can be output to external files. The only advice that can be given is to log document printing on print servers. For especially critical forms, configure the action panel associated with the protected table field so that the option to display the list is not available from that panel, and disable the context menu (see Fig. 6).

Data exchange in a distributed database

The data exchange format is quite simple and is described in the documentation. If a user has the ability to substitute several files, he can make unauthorized changes to the system (although this is rather labor-intensive). The ability to create a peripheral database using distributed database exchange plans should not be available to ordinary operators.

XML standard data exchange

In the standard data exchange used between typical configurations (for example, "Trade Management" and "Enterprise Accounting"), the exchange rules can specify event handlers for loading and unloading objects. The handlers are read from the file and run by the standard load/unload processing via the "Execute()" procedure (which runs on the client side). Obviously, it is not difficult to create a fake exchange file that performs malicious actions. For most user roles of typical solutions, exchange is allowed by default.

Recommendation: restrict access to XML exchange for the majority of users (leave it only to information security administrators). Log the launches of this processing and preserve the exchange file, for example by e-mailing it to the infobase administrator before loading.
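
Logging the launch can be sketched with the platform's WriteLogEvent method; the event name, the comment text, and the ExchangeFileName variable here are my own illustrative choices.

```bsl
// Sketch: record the start of an XML exchange import in the
// registration log before the exchange file is actually loaded.
WriteLogEvent("DataExchange.XMLImport",
    EventLogLevel.Information,
    ,
    ,
    "Loading exchange file: " + ExchangeFileName);
```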

Using generic reports, especially the report console

Another issue is users' default access to generic reports, especially the Report Console report. This report allows almost arbitrary queries against the infobase, and even if the 1C rights system (including RLS) is configured quite rigidly, it lets a user obtain a lot of "extra" information and force the server to execute a query that consumes all of the system's resources.

Using full screen mode (desktop mode)

One of the most effective ways to organize specialized interfaces with limited access to program functionality is the full-screen mode of the main (and possibly only) form of the interface used. In this case there are no accessibility questions about, for example, the "File" menu: all user actions are limited by the capabilities of the form used. For details, see "Features of the implementation of the desktop mode" on the ITS disk.

Backup

Backup in the client-server version of 1C can be performed in two ways: by dumping the data to a file with the dt extension, or by creating backups by means of SQL. The first method has quite a few disadvantages: it requires exclusive access, creating the copy takes much longer, and in some cases (when the IB structure is damaged) creating the archive is impossible at all; its one advantage is the minimal size of the archive. With SQL backup the opposite is true: the copy is created in the background by the SQL server; thanks to the simple structure and absence of compression it is a very fast process, and as long as the physical integrity of the SQL database is intact, the backup succeeds; however, the size of the copy matches the true size of the IB in its expanded state (no compression is performed). Because of the additional advantages of the MS SQL backup system, it is more expedient to use it (3 types of backups are available: full, differential, and transaction log copy; regularly executed jobs can be created; a backup copy and a backup system can be deployed quickly; the required disk space can be predicted; and so on). The main points of organizing backup from the system security point of view are:

  • The need to choose a storage location for backups so that they are not available to users.
  • The need to store backups at a physical distance from the MS SQL server (in case of natural disasters, fires, attacks, etc.)
  • Ability to grant rights to start backups to a user who does not have access to backups.

For more detailed information, refer to the MS SQL documentation.
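
For orientation, the three backup types mentioned above look roughly as follows in Transact-SQL; the database and file names are illustrative, and the MS SQL documentation covers the full set of options.

```sql
-- Full backup: the baseline copy of the entire database
BACKUP DATABASE SomeData TO DISK = 'D:\Backup\SomeData_full.bak'

-- Differential backup: only the changes since the last full backup
BACKUP DATABASE SomeData TO DISK = 'D:\Backup\SomeData_diff.bak'
WITH DIFFERENTIAL

-- Transaction log backup: enables point-in-time recovery
BACKUP LOG SomeData TO DISK = 'D:\Backup\SomeData_log.trn'
```

Scheduling these as regular SQL Server jobs gives exactly the background, non-exclusive backup described above.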

Data encryption

To protect data from unauthorized access, various cryptographic means (both software and hardware) are often used, but their feasibility largely depends on the correctness of use and the overall security of the system. We will consider data encryption at various stages of data transfer and storage using the most common means and the main design errors of a system using cryptographic means.

There are several main stages of information processing that can be protected:

  • Data transfer between the client part of the system and the application server
  • Data transfer between application server and MS SQL Server
  • Data stored on MS SQL Server (data files on a physical disk)
  • Encryption of data stored in information security
  • External data (in relation to information security)

For data stored on the client side and on the application server (saved user settings, the list of infobases, etc.), encryption is justified only in very rare cases and is therefore not considered here. When using cryptographic tools, do not forget that their use can significantly reduce the performance of the system as a whole.

General information about cryptographic protection of network connections when using the TCP / IP protocol.

Without protection, all network connections are vulnerable to unauthorized monitoring and access. To protect them, you can encrypt data at the network protocol level. To encrypt data transmitted within a local network, the IPSec tools provided by the operating system are most often used.

IPSec tools provide encryption of transmitted data using DES and 3DES algorithms, as well as integrity checking using MD5 or SHA1 hash functions. IPSec can operate in two modes: transport mode and tunnel mode. Transport mode is better suited for securing LAN connections. Tunnel mode can be used to organize VPN connections between individual network segments or to secure a remote connection to a local network over open data channels.

The main advantages of this approach are:

  • The ability to centrally manage security using Active Directory tools.
  • The ability to exclude unauthorized connections to the application server and the MS SQL server (for example, protection against unauthorized addition of infobases on the application server).
  • Elimination of "listening" to network traffic.
  • No need to change the behavior of application programs (in this case, 1C).
  • The standardized nature of such a solution.

However, this approach has limitations and disadvantages:

  • IPSec does not protect data from tampering and eavesdropping directly on the source and destination computers.
  • The amount of data transmitted over the network is slightly larger than without using IPSec.
  • Using IPSec places some additional load on the central processor.

A detailed description of the implementation of IPSec facilities is beyond the scope of this article and requires an understanding of the basic principles of the IP protocol. In order to properly configure connection protection, read the corresponding documentation.

Separately, several aspects of the 1C license agreement should be mentioned when organizing VPN connections. The fact is that, despite the absence of technical restrictions, when connecting several segments of a local network, or when a separate computer accesses the local network remotely, several base product deliveries are usually required.

Data encryption during transmission between the client part of the system and the application server.

In addition to encryption at the network protocol level, it is possible to encrypt data at the COM+ protocol level, as mentioned in the ITS article "Regulating user access to the infobase in the client-server version". To implement this, set the Authentication level for calls to "Packet Privacy" in Component Services for the 1CV8 application. In this mode, the packet is authenticated and encrypted, including the data as well as the sender's identity and signature.

Data encryption during transmission between the application server and MS SQL Server

MS SQL Server provides the following data encryption tools:

  • It is possible to use Secure Sockets Layer (SSL) when transferring data between the application server and MS SQL Server.
  • When the Multiprotocol network library is used, data encryption at the RPC level is applied. This is potentially weaker encryption than SSL.
  • If the Shared Memory protocol is used (this happens if the application server and MS SQL Server are located on the same computer), then encryption is not used in any case.

To require encryption of all transmitted data for a specific MS SQL server, use the "Server Network Utility". Run it and, on the "General" tab, check the "Force protocol encryption" box. The encryption method is chosen according to the one used by the client application (i.e., by the 1C application server). To use SSL, you must properly configure a certification authority on your network.

To require encryption of all transmitted data for a specific application server, use the "Client Network Utility" (usually located at "C:\WINNT\system32\cliconfg.exe"). As in the previous case, check the "Force protocol encryption" box on the "General" tab.

It should be borne in mind that the use of encryption in this case can significantly affect the performance of the system, especially when using queries that return large amounts of information.

In order to more fully protect the connection between the application server and MS SQL Server when using the TCP / IP protocol, we can recommend several changes to the default settings.

First, you can set a port other than the standard one (port 1433 is used by default). If you decide to use a non-standard TCP port for communication, please note that:

  • MS SQL Server and Application Server must use the same port.
  • When using firewalls, this port must be allowed.
  • You cannot use a port that may be used by other applications on the MS SQL Server machine. For reference, see http://www.isi.edu/in-notes/iana/assignments/port-numbers (URL taken from SQL Server Books Online).
  • When using multiple instances of MS SQL Server, be sure to read the MS SQL documentation for configuration (section "Configuring Network Connections").

Secondly, in the TCP / IP protocol settings on the MS SQL server, you can select the "Hide server" checkbox, which prohibits responses to broadcast requests for this instance of the MS SQL Server service.

Encrypting MS SQL data stored on disk

There is a fairly large choice of software and hardware for encrypting data located on a local disk (the standard Windows EFS capability, the use of eToken keys, and third-party programs such as Jetico BestCrypt or PGPDisk). One of the main tasks these tools perform is protecting data in the event of media loss (for example, when the server is stolen). It should be noted in particular that Microsoft does not recommend storing MS SQL databases on encrypted media, and this is quite reasonable. The main problems in this case are a significant drop in performance and possible reliability problems in the event of failures. A second factor complicating the system administrator's life is the need to ensure that all database files are available the first time the MS SQL service accesses them (i.e., it is desirable to exclude interactive actions when connecting the encrypted media).

In order to avoid a noticeable drop in system performance, you can use the ability of MS SQL to create databases in several files. Of course, in this case, the MS SQL database should not be created by the 1C server when creating an infobase, but should be created separately. An example TSQL script with comments is shown below:

USE master
GO
-- Create the SomeData database,
CREATE DATABASE SomeData
-- whose data is located entirely in the PRIMARY filegroup.
ON PRIMARY
-- The primary data file is located on an encrypted medium (logical drive E:)
-- and has an initial size of 100 MB; it can grow automatically to 200 MB in
-- 20 MB increments.
(NAME = SomeData1,
FILENAME = 'E:\SomeData1.mdf',
SIZE = 100MB,
MAXSIZE = 200MB,
FILEGROWTH = 20MB),
-- The second data file is located on an unencrypted medium (logical drive C:)
-- and has an initial size of 100 MB; it can grow automatically until the disk
-- is full, in increments of 5% of the current file size (rounded to 64 KB).
(NAME = SomeData2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\SomeData2.ndf',
SIZE = 100MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 5%)
LOG ON
-- Although the transaction log could also be split into several files, there is
-- no point in doing so: this file changes far more often and is regularly
-- truncated (for example, when a database backup is created).
(NAME = SomeDataLog,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\SomeData.ldf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10MB)
GO
-- It is best to immediately give ownership of the database to the user under
-- whose account 1C will connect. To do this, declare the newly created
-- database current,
USE SomeData
GO
-- and execute the sp_changedbowner procedure
EXEC sp_changedbowner @loginame = 'SomeData_dbowner'

A small digression about automatic data file growth. By default, newly created databases grow their files in increments of 10% of the current file size. This is perfectly acceptable for small databases, but not so good for large ones: with a database of, say, 20 GB, the file grows by 2 GB at once. Although such an event is rare, it can last several tens of seconds (all other transactions are effectively idle during this time), which, if it happens during active work with the database, can cause failures. The second negative consequence of proportional growth shows up when disk space is nearly exhausted: the likelihood of a premature failure due to insufficient free space. For example, if a 40 GB disk partition is entirely allocated to one database (more precisely, to one file of that database), then the critical file size at which storage must be urgently reorganized (urgently enough to interrupt users' normal work) is about 35 GB. With a fixed increment of 10-20 MB, you can keep working until 39 GB is reached.
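The arithmetic above can be sketched in a few lines (the 40 GB partition and the growth policies are the hypothetical figures from this paragraph):

```python
def final_size_mb(start_mb, limit_mb, grow):
    """Grow a file until the next autogrow step would overflow the partition.
    `grow` maps the current size to the next increment in MB."""
    size = start_mb
    while size + grow(size) <= limit_mb:
        size += grow(size)
    return size

# 10% proportional growth: a 20 GB file grows by 2 GB in a single step
assert 20 * 1024 * 0.10 == 2048  # MB

# 40 GB partition, one data file starting at 1 GB
proportional = final_size_mb(1024, 40 * 1024, lambda s: s * 0.10)
fixed = final_size_mb(1024, 40 * 1024, lambda s: 20)
# proportional growth strands several GB of free space, while a fixed
# 20 MB step keeps working almost to the edge of the partition
assert fixed >= 40 * 1024 - 20
assert proportional < fixed - 2000
```

The gap between the two final sizes is exactly why a fixed increment is preferable once the file gets large.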

Therefore, although the listing above specifies 5% growth for one of the database files, for large databases it is better to set a fixed increment of 10-20 MB. When setting growth increments, keep in mind that until one of the files in a filegroup reaches its maximum size, the following rule applies: all files of one filegroup grow at the same time, once they are all full. Thus, in the example above, when the SomeData1.mdf file reaches its maximum size of 200 MB, the SomeData2.ndf file will be about 1.1 GB in size.

After creating such a database, even if its unprotected files SomeData2.ndf and SomeData.ldf become available to an attacker, it will be extremely difficult to restore the true state of the database: the data (including information about its logical structure) is scattered across several files, and the key information (for example, which files make up the database) is in the encrypted file.

Naturally, if database files are stored using cryptographic means, backups (at least of those files) should not be written to unencrypted media. Use the appropriate "BACKUP DATABASE" command syntax to back up individual database files. Note that although a database backup can be protected with a password (the "PASSWORD =" and "MEDIAPASSWORD =" options of the "BACKUP DATABASE" command), such a backup is NOT encrypted!

Encrypting application server and client data stored on disks

In most cases, storing the files used by 1C:Enterprise (the client side and the application server) on encrypted media cannot be considered justified due to unreasonably high costs. However, if such a need exists, note that the application server and the client side very often create temporary files. These files can remain after the application exits, and it is practically impossible to guarantee their removal by 1C's own means. Thus, it makes sense either to encrypt the directory used by 1C for temporary files, or not to store it on disk at all by using a RAM drive (the latter is not always possible given the size of the generated files and the RAM requirements of the 1C:Enterprise application itself).

Data encryption with built-in 1C tools.

The standard possibilities for encryption in 1C come down to the objects for working with Zip files with encryption parameters. The following encryption modes are available: the AES algorithm with a 128-, 192- or 256-bit key, and the outdated algorithm originally used in the Zip archiver. Zip files encrypted with AES are unreadable by a number of archivers (WinRAR, 7zip). To generate a file containing encrypted data, you must specify a password and an encryption algorithm. The simplest encryption and decryption functions based on this capability are given below:

Function EncryptData(Data, Password, EncryptionMethod = Undefined) Export

// Write the data to a temporary file. Strictly speaking, not every value can be saved this way.
TempFileName = GetTempFileName();
ValueToFile(TempFileName, Data);

// Pack the temporary file into an encrypted archive
TempArchiveFileName = GetTempFileName("zip");
Zip = New ZipFileWriter(TempArchiveFileName, Password, EncryptionMethod);
Zip.Add(TempFileName);
Zip.Write();

// Read the resulting archive into memory
EncryptedData = New ValueStorage(New BinaryData(TempArchiveFileName));

// Delete the temporary files
DeleteFiles(TempFileName);
DeleteFiles(TempArchiveFileName);

Return EncryptedData;

EndFunction

Function DecryptData(EncryptedData, Password) Export

// Warning! The validity of the passed parameters is not checked

// Write the passed value to a file
TempArchiveFileName = GetTempFileName("zip");
BinaryArchiveData = EncryptedData.Get();
BinaryArchiveData.Write(TempArchiveFileName);

// Extract the first file of the archive just written
TempFileName = GetTempFileName();
Zip = New ZipFileReader(TempArchiveFileName, Password);
Zip.Extract(Zip.Items[0], TempFileName, ZIPRestoreFilePathsMode.DontRestore);

// Read the extracted file
Data = ValueFromFile(TempFileName + "\" + Zip.Items[0].Name);

// Delete the temporary files
DeleteFiles(TempFileName);
DeleteFiles(TempArchiveFileName);

Return Data;

EndFunction

Of course, this method can hardly be called ideal: the data is written to a temporary folder in clear text, the method's performance is frankly poor, and storage in the database requires an extremely large amount of space; but it is the only way based solely on the platform's built-in mechanisms. It also has an advantage over many other methods: it packs the data at the same time as encrypting it. If you want encryption without these drawbacks, you must either implement it in an external component, or call existing libraries through COM objects, for example Microsoft CryptoAPI. As an example, here are functions for encrypting and decrypting a string based on a password:

Function EncryptStringDES(UnencryptedString, Password)

CAPICOM_ENCRYPTION_ALGORITHM_DES = 2; // This constant is from CAPICOM

EncryptionEngine = New COMObject("CAPICOM.EncryptedData");
EncryptionEngine.SetSecret(Password);
EncryptionEngine.Content = UnencryptedString;
EncryptionEngine.Algorithm.Name = CAPICOM_ENCRYPTION_ALGORITHM_DES;
EncryptedString = EncryptionEngine.Encrypt();

Return EncryptedString;

EndFunction // EncryptStringDES()

Function DecryptStringDES(EncryptedString, Password)

// Warning! Parameters are not checked!

EncryptionEngine = New COMObject("CAPICOM.EncryptedData");
EncryptionEngine.SetSecret(Password);
Try
EncryptionEngine.Decrypt(EncryptedString);
Except
// Wrong password!
Return Undefined;
EndTry;

Return EncryptionEngine.Content;

EndFunction // DecryptStringDES()

Note that passing an empty string or an empty password to these functions generates an error. The string obtained by this encryption procedure is slightly longer than the original. A peculiarity of this encryption is that encrypting the same string twice will NOT produce identical results.
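Both observations (ciphertext slightly longer than the plaintext, and two encryptions of the same string never matching) are characteristic of schemes that prepend a random salt or IV to the output. A toy sketch of the idea in Python, stdlib only; the XOR "cipher" here is purely illustrative and is neither CAPICOM nor production-grade cryptography:

```python
import hashlib
import os

def toy_encrypt(data: bytes, password: str) -> bytes:
    # A fresh random salt makes every ciphertext unique, even for equal inputs
    salt = os.urandom(8)
    keystream = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000,
                                    dklen=len(data))
    return salt + bytes(a ^ b for a, b in zip(data, keystream))

def toy_decrypt(blob: bytes, password: str) -> bytes:
    salt, ciphertext = blob[:8], blob[8:]
    keystream = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000,
                                    dklen=len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, keystream))

first = toy_encrypt(b"some string", "password")
second = toy_encrypt(b"some string", "password")
assert first != second                        # two encryptions differ
assert len(first) == len(b"some string") + 8  # output longer than input
assert toy_decrypt(first, "password") == b"some string"
```

The extra length is exactly the stored salt; decryption reads it back before regenerating the keystream.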

The main mistakes when using cryptographic tools.

When using cryptographic tools, the same mistakes are very often made:

Underestimating the performance degradation when using cryptography.

Cryptography is a computationally demanding task (especially for algorithms such as DES, 3DES, GOST, PGP). Even with efficient, optimized algorithms (RC5, RC6, AES), there is no escaping the extra data movement in memory and the computational processing, and this nearly negates the capabilities of many server components (RAID arrays, network adapters). With hardware encryption, or hardware key derivation for encryption, there is an additional potential bottleneck: the speed of data transfer between the add-on device and memory (where the device's own performance may not play the decisive role). When encrypting small amounts of data (for example, mail messages), the added computational load is not very noticeable, but total encryption of absolutely everything can seriously affect the performance of the system as a whole.

Underestimating modern capabilities for brute-forcing passwords and keys.

At the moment, technology is such that a 40-48-bit key can be brute-forced by a small organization, and a 56-64-bit key by a large one. That is, algorithms using keys of at least 96 or 128 bits must be used. But most keys are generated with hash algorithms (SHA-1, etc.) from passwords entered by the user, and in that case even a 1024-bit key may not help. First, an easily guessed password is often used. Factors that facilitate guessing include: using only one letter case; using words, names and common expressions in passwords; using well-known dates, birthdays, etc.; using "templates" when generating passwords (for example, 3 letters, then 2 digits, then 3 letters throughout the organization). A good password should be a fairly random combination of letters in both cases, digits and punctuation marks. Passwords typed on the keyboard up to 7-8 characters long can be guessed in a reasonable time even when these rules are followed, so a password should be at least 11-13 characters long. The ideal solution is to stop deriving the key from a password altogether, for example by using smart cards, but in that case you must provide for protection against loss of the key medium.
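The key-length figures above are easy to relate to password length: a password drawn uniformly from an alphabet of N symbols with length L carries about L·log2(N) bits of key material. A quick check (the 40- and 64-bit thresholds are the ones quoted above):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Bits of entropy in a uniformly random password
    return length * math.log2(alphabet_size)

# 8 lowercase letters: under 40 bits, within reach of a small organization
assert entropy_bits(26, 8) < 40
# 8 mixed-case letters and digits (62 symbols): still under 64 bits
assert entropy_bits(62, 8) < 64
# 12 characters over ~94 printable ASCII symbols: comfortably past 64 bits
assert entropy_bits(94, 12) > 64
```

Real passwords are far from uniformly random, so these numbers are upper bounds; the templates and dictionary words listed above cut the effective entropy drastically.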

Insecure storage of keys and passwords.

Typical examples of this error are:

  • long and complex passwords written on stickers glued to the user's monitor.
  • storing all passwords in a file that is not protected (or protected much weaker than the system itself)
  • storage of electronic keys in the public domain.
  • frequent transfer of electronic keys between users.

Why make an armored door if the key is under the rug by the door?

Transferring initially encrypted data to an insecure environment.

When organizing a security system, make sure it serves its purpose. For example, I have come across a situation (not related to 1C) where a file that was stored encrypted was placed, while the program was running, in unencrypted form in a temporary folder, from where it could easily be read. It is also not uncommon for backups of encrypted data to lie in clear form somewhere "not far" from the data itself.

Misuse of cryptographic tools

Encrypting transmitted data does not make it inaccessible at the point where it is used. For example, IPSec services in no way prevent traffic from being intercepted at the application level on the application server.

Thus, to avoid errors when implementing cryptographic systems, you should (at a minimum) do the following before deploying one:

  • Find out:
    • What do you need to protect?
    • What method of protection should you use?
    • For which parts of the system do you need to provide security?
    • Who will control access?
    • Will encryption work in all the right places?
  • Determine where the information is stored, how it is sent over the network, and the computers from which the information will be accessed. This will provide information on network speed, capacity, and utilization prior to system deployment, which is useful for optimizing performance.
  • Assess the system's vulnerability to different types of attacks.
  • Develop and document a system security plan.
  • Evaluate the economic efficiency (justification) of using the system.

Conclusion

Of course, a cursory review cannot cover every aspect of security in 1C, but let us draw some preliminary conclusions. This platform cannot be called ideal: like many others, it has its own problems in organizing a secure system. But this in no way means those problems cannot be worked around; on the contrary, almost all the shortcomings can be eliminated with correct development, deployment and use of the system. Most problems arise from insufficient elaboration of a specific application solution and its execution environment. For example, typical solutions without significant changes simply do not provide for a sufficiently secure system.

This article once again demonstrates that any set of security measures should cover all stages of implementation: development, deployment, system administration and, of course, organizational measures. In information systems, it is the "human factor" (including users) that is the main security threat. This set of measures must be reasonable and balanced: it makes no sense and it is unlikely that sufficient funds will be allocated to organize protection, which exceeds the cost of the data itself.


The Informix® DataBlade™ API Programmer's Guide is available for download at. Its "Managing Stack Space" section describes how to create user-defined routines (UDRs). This article provides additional information and debugging tips.

The information below is valid whether the UDR is running on a user-defined virtual processor (VP) or on a CPU VP. The thread stack can be moved to a user-defined virtual processor just before the UDR is executed.

What size stack is allocated for UDRs?

The size of the stack available for a UDR depends on how the UDR was created:

    using the STACK modifier, which allows the UDR to use its own dedicated stack,

    without the STACK modifier, which means that the UDR will share the stack allocated by the server with the thread making the request. The stack size in this case will be determined by the value of the STACKSIZE parameter in the onconfig configuration file.

STACK modifier

The CREATE PROCEDURE and CREATE FUNCTION statements have an optional STACK modifier that lets you specify the amount of stack space, in bytes, that the UDR needs to execute.

If you use the STACK modifier when creating a UDR, the server will allocate and deallocate stack space each time the UDR is executed. The actual available size is the STACK value in bytes minus some overhead depending on the number of function arguments.

If the STACK value is less than the STACKSIZE value in the onconfig file (see the next section), the stack allocated for the UDR is automatically rounded up to STACKSIZE.
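The allocation rule can be modelled in a few lines (a sketch of the behaviour described above; the byte sizes are illustrative, not Informix defaults):

```python
def udr_stack_bytes(stack, stacksize):
    """Model of the documented rule: no STACK modifier means the UDR shares
    the request thread's STACKSIZE stack; a STACK value below STACKSIZE is
    rounded up to STACKSIZE."""
    if stack is None:          # UDR created without the STACK modifier
        return stacksize
    return max(stack, stacksize)

STACKSIZE = 64 * 1024  # hypothetical onconfig value
assert udr_stack_bytes(None, STACKSIZE) == STACKSIZE
assert udr_stack_bytes(32 * 1024, STACKSIZE) == STACKSIZE    # rounded up
assert udr_stack_bytes(128 * 1024, STACKSIZE) == 128 * 1024  # honoured as-is
```

Remember that the actual usable space is further reduced by per-call overhead, as noted above.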

STACKSIZE configuration parameter

The onconfig configuration file includes a STACKSIZE parameter that defines the default stack size for user threads.

If you do not specify STACK when creating a UDR, the server does not allocate additional stack space to execute that UDR. Instead, the UDR uses the stack space allocated to fulfill the request. The available stack size will depend on the overhead of executing the function at the SQL level.

A thread stack is allocated once for the particular thread executing a request. Performance is better when the UDR shares the thread's stack, since the server does not spend resources allocating an additional stack on each UDR call. On the other hand, if the UDR's stack usage approaches STACKSIZE, a stack overflow can occur when the function is called as part of a complex query (in which case less stack space is available to execute the UDR).

Remember not to set the STACKSIZE too high, as this will affect all user threads.

When do you need to manage the stack size?

You must manage stack space if the UDR makes recursive calls, or if the UDR requires more stack space than is available by default on the request thread's stack (STACKSIZE).

There are two ways to increase the stack to execute UDRs:

    Specify STACK modifier when creating UDR.

    Use mi_call () to make recursive calls (see the Informix DataBlade API Programmer's Guide for an example).

If you do not specify the size with STACK, and if you do not use mi_call () to increase the current stack, and if the UDR does something that requires a lot of stack space, it will cause a stack overflow.

Note that some functions like mi_ * add a new stack segment for their own execution. These segments are freed upon return to the caller of the UDR.

What if something goes wrong?

Monitoring Stack Usage

The purpose of monitoring is to identify a specific UDR that is causing a stack overflow so that you can change the STACK value specifically for a specific UDR.

    Observing stack usage with "onstat -g sts"

    Observe the session executing the SQL query with "onstat -g ses session_id"

After identifying the SQL query that results in a stack overflow, determine the stack usage by separately executing each UDR that is part of the original query.

You can dynamically set the STACK value for the UDR. For example:

alter function MyFoo (lvarchar, lvarchar) with (add stack = 131072);

After changing the STACK value, you should test the original query to make sure that it now works stably.

Increase STACKSIZE

Alternatively, try increasing the STACKSIZE value. Check if this solved the problem. (Don't forget to return the old value later).

If increasing STACKSIZE doesn't work, the problem is most likely memory corruption. Here are some suggestions:

    Enable memory scribble and memory pool checking. The "Debugging Problems" section of the Memory Allocation for UDRs article explains how to do this.

    Reconsider using mi_lvarchar. Pay special attention to the places where mi_lvarchar is passed to a function that expects to receive a null-terminated string as an argument.

    Reduce the number of CPU (or user) VPs to one to reproduce the problem faster.

mi_print_stack () - Solaris

Informix Dynamic Server for Solaris OS includes a mi_print_stack() function that can be called in a UDR. By default, this function saves the stack frame to the following file:

/tmp/default.stack

You cannot change the name of the output file, but you can change its location by setting the DBTEMP environment variable. Make sure that the informix user can write to the $DBTEMP directory. Any errors encountered while executing mi_print_stack() are printed to $MSGPATH.

This feature is only available on Solaris.

Glossary

Terms and abbreviations used in this article:

UDR: User-Defined Routine
VP: Virtual Processor

04/14/2016 Version 3.22 Changed interface, fixed errors when transferring registers, changed the procedure for transferring organizations and accounting policies. Platform 8.3.7.2027 BP 3.0.43.174
03/17/2016 Version 3.24 Fixed bugs. Platform 8.3.8.1747 BP 3.0.43.241
06/16/2016 Version 3.26 Fixed bugs. Platform 8.3.8.2088 BP 3.0.44.123
10/16/2016 Version 4.0.1.2 Fixed transfer of value store, changed transfer of accounting policy for releases 3.44. *. Platform 8.3.9.1818 BP 3.0.44.164.
04/19/2017 Version 4.0.2.7 The algorithm for transferring registers associated with directories has been changed, noticed errors have been fixed, transfer with rewriting of links has been fixed.
05/29/2017 Version 4.0.4.5 Changed transfer of movements, added viewing of movements of transferred documents, something else ...
05/30/2017 Version 4.0.4.6 Fixed a bug when filling in the list of existing directories in the source (thanks to shoy)
06/17/2017 Version 4.0.5.1 The algorithm for transferring movements has been changed.
07/19/2017 Version 4.0.5.4 The transfer of CI from BP 2.0 has been changed. Unexpectedly, the transfer from UT 10.3 was carried out by Smilegm, in this version the transfer is slightly corrected for such a situation)))
08/10/2017 Version 4.0.5.5 Fixed errors when transferring from BP 2.0
09/19/2017 Version 4.4.5.7 Fixed connection check for 3.0.52. *
11/28/2017 Version 4.4.5.9 Fixed bugs
12/06/2017 Version 5.2.0.4 The link search algorithm has been redesigned. Added transfer procedures from BP 1.6, there is no more rigid binding to the BP - you can safely use "almost" identical configurations to transfer data. I will try to correct all comments promptly.
12/08/2017 Version 5.2.1.3 Added an algorithm for transferring payroll sheets from BP.2.0 to BP 3.0. Changes are included for sharing between the same configurations.
12/19/2017 Version 5.2.2.2 Corrected the transfer of independent registers of information for directories, which are in the dimensions of these registers.

12/06/2017 New processing version 5.2.0.4. Among the significant changes is the ability to transfer from BP 1.6 to BP 3.0. The main change is control over how references are matched: in previous versions the search was by GUID, while in this version you can enable search "By attributes":

01/17/2018 Version 5.2.2.3 Fixed-noticed errors of subordinate directories and periodic registers of information.

07/19/2018 Version 5.2.2.8 Fixed bugs.

where you can set the search attributes for any directory. This mode came about at the numerous requests of users, for cases when an exchange is needed into an already existing database that contains data (for example, to merge the accounting of two organizations into one database).

12/21/2015 Platform 8.3.7.1805 and BP 3.0.43.29 were released, and accordingly a new version of the processing, 3.1 :-) (description below). New functionality: the ability to compare balances and turnovers between two BP databases (for all accounts, if the charts of accounts coincide, or for individual matched accounts, with or without analytics).
01/03/2016 Version 3.5 - changed the mechanism for connecting to the source database - brought in line with BSP 2.3.2.43. Fixed minor bugs. Platform 8.3.7.1845, BP 3.0.43.50
02/16/2016 Version 3.6 - Added the "Set manual correction" flag for documents transferred with movements. Fixed transfer of movements - documents with a date less than the beginning of the period are transferred without movements. Platform 8.3.7.1917, BP 3.0.43.116
03/22/2016 Version 3.10 - Added the "Always overwrite links" flag for mandatory overwriting of referenced objects (the transfer speed is significantly reduced, but sometimes it is necessary). The "Preparation" tab has been added, where you can configure the correspondence of the source and destination charts of accounts (on a level with account codes) and transfer of constants. Platform 8.3.7.1970, BP 3.0.43.148

04/03/2016 Version 3.11 The filling of the list of documents existing in the source has been changed: previously it was filled from movements on the chart of accounts, now it is filled simply by references for the period, as in // website / public / 509628 /

The processing is intended for transferring data for any period, similarly to "Unloading and loading MXL" from ITS, only without XML, JSON or other intermediate files: the exchange goes from database to database via COM. In versions later than 3.10, the connection follows the algorithm from the BSP, which registers comcntr.dll (if the OS "allows") and issues various messages when a connection cannot be established, for example "Infobase is in the process of updating", etc. A check has been added for selecting the receiver as the source infobase: a warning is issued.

Can be used for:

1. Transfer of master and reference data (NSI) from the source infobase to the receiver (the transfer of all reference data is performed at the user's request; the necessary directories, etc., are transferred by reference in any transfer).

2. Transfer of documents for any selected period.

3. Transfer of all information from a "broken" infobase, if it can still be started in 1C:Enterprise mode but data export or launching the Configurator is impossible.

A feature of the processing: the receiver and source infobases can be different; transfer from 2.0 to 3.0 works even though the editions differ!!! Mismatched attributes are ignored, or transfer algorithms must be specified for them.

Comment: Data Conversion is NOT USED! And don't ask why!!! For the most meticulous: BP 3.0 changes almost every day, and there is no longer any strength to keep transfer rules up to date; everything is simpler here :-).

Another feature of the processing is that it is launched in the receiver's infobase (the closest analogs in functionality work the other way around, from the source to the receiver).

Getting started: you must specify the processing period and select the organization from the source; it will be transferred to the receiver.

When an organization is transferred, its accounting policy and the "accompanying" information registers are transferred as well. Therefore, when you first select an organization in the source, it takes some time before it appears in the receiver.

The charts of accounts of the source and the receiver must be the same; accounts that differ in the 2.* editions are not transferred to the receiver. Support for mapping accounts and analytics is planned for the future. Accounts are transferred by code; accounts whose codes are not found in the receiver are NOT CREATED!!!

The rest of the objects are transferred by internal identifiers (GUID), so you should pay attention to some key directories, for example Currencies.

If you plan to exchange with a "clean" database, it is better to delete the directories filled in at first start before the exchange. The processing provides a page where you can list these directory elements and delete them. At the very least, you need to remove the "RUB" currency, since duplication is almost inevitable (in principle, this is easily corrected afterwards with the duplicate search-and-replace built into BP 3.0).

When the processing is opened while the initial filling form is open, it shows the page for deleting the directories that were filled in during initial filling:

Since version 3.22 the interface has changed: all preparatory operations are now on tabs and always available.


It is important to check that the source and receiver charts of accounts match, and be sure to specify the account correspondence.

There is no need to delete predefined directory elements: they are transferred by configuration identifiers (not by GUID).

You can select objects for transfer using the selection form for directories and documents (the information registers associated with an object migrate automatically, so you do not need to select them separately). The transfer of registers is temporarily disabled: the list of registers to transfer still needs to be worked out (some must be transferred, some must not). At this stage, what is transferred along with the directories is enough; the list of registers for transfer will appear in the template in future versions.

When exchanging with 2.0, some attributes (for example, Contact Information) are carried over by an algorithm built into the processing, since they are stored differently in 2.0 and 3.0. The situation is similar for a number of documents (for example, Debt Adjustment).

The list of object types can be filled in several ways; in version 3.22 this is placed in a submenu, and the changes are shown in the picture:

To simplify use of the processing, you do not have to select directories for the exchange: you can simply fill the list of types in the receiver with only those directory types that have at least one record in the source.

The processing has a built-in layout listing the directories that need not be transferred from source to receiver (the "Exclude from transfer" layout). You can add any directories to this layout or remove them. If you do not need to transfer all the reference data, it is enough to transfer the documents; their list can also be obtained without selecting types, by simply filling in all source documents for which there are postings.

Transfer of documents with their movements is supported: for 3.0-to-3.0 exchanges with matching charts of accounts it works one-to-one; when exchanging 2.0 to 3.0, errors are possible, so it is recommended to transfer documents without movements and then simply repost them in the receiver. When documents are transferred with movements, the "Manual correction" flag is set.

The "Posted" attribute is set on the receiver's documents the same as in the source, but the movements (if they were not transferred) appear only after the documents are posted, for example using the "Group document processing" built into BP 3.0 (the recommended option) or from this processing (there is a "Post documents" button).

If the processing is to be used for permanent exchange, it can be registered in the receiver's infobase (the "Register" button). For one-off transfers you can simply open it via File - Open.

12/21/2015 - Version 3.1, platform 8.3.7.1805 and BP 3.0.43.29 (version 2.15 does not work on 3.0.43.*: the configuration has changed a lot).

Changed:

The dialog for choosing a connection option: the "Client-server" flag is always available; depending on its setting, either the file base folder selection or the fields for the database name and server name become available (fixes the dialog error of version 2.15).

- NEW FUNCTIONALITY: a mechanism for reconciling balances and turnovers between the source and receiver databases at varying levels of detail:


I think the choice of reconciliation options is clear from the figure:


There are differences in use between the thin and thick client: in the thick client, the file comparison window is displayed immediately:


In the thin client, I did not resort to programmatically pressing buttons; I suggest a simple option for displaying the comparison window:


Comparison in the thin client is, IMHO, more convenient, since it has navigation buttons for jumping between differences, which beats scrolling with the mouse on large tables:

03/22/2016 - Version 3.10. Added the "Always overwrite links" flag for mandatory overwriting of referenced objects (transfer speed drops significantly, but sometimes this is necessary). Added the "Preparation" tab, where you can configure the correspondence between the source and destination charts of accounts (at the level of account codes) and the transfer of constants. Platform 8.3.7.1970, BP 3.0.43.148.

- NEW FUNCTIONALITY: Before transferring documents, it is recommended to check the charts of accounts of the source and destination for consistency, as well as the consistency of the established constants.

For this, the "Preparation" tab was added, in which you can set these correspondences:


The algorithm for filling the account correspondence table is simple: the turnovers existing in the source are analyzed, and for each account found there a match is searched by code in the receiver. If no match is found, a line with that account code appears in the table, and you need to select the receiver account that will be used during transfer. The correspondence of accounts is established at the level of codes.
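The matching rule can be sketched outside of 1C as well; the following C model is purely illustrative (the structure, names, and data are invented, not the processing's actual code): each account is identified by its code, and a source account either finds a receiver account with the same code or is reported as unmatched.

```c
#include <assert.h>
#include <string.h>

/* Illustrative model: an account is identified by its code string. */
typedef struct {
    char code[16];
    const char *name;
} Account;

/* Returns the index of the receiver account with the given code,
   or -1 if no match is found (in the processing, the user must then
   pick the receiver account manually). */
static int find_by_code(const Account *receiver, int n, const char *code) {
    for (int i = 0; i < n; i++)
        if (strcmp(receiver[i].code, code) == 0)
            return i;
    return -1;
}
```

A source account with code "60.01" would match a receiver account with the same code regardless of its name; an unknown code yields -1 and a row in the correspondence table.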

To check and transfer the correspondence of the established constants, the corresponding table is used:

Fill it in and, if necessary, transfer. Only constants marked with the flag are carried over.

The program stack is a special area of memory organized on the LIFO principle (Last In, First Out). The name "stack" comes from the analogy with a stack of plates: you can put plates on top of one another (adding to the stack: "push"), and then take them back starting from the top (getting a value from the stack: "pop"). The program stack is also called the call stack, the execution stack, or the machine stack (to distinguish it from the "stack" as an abstract data structure).
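The plate analogy translates directly into code. Below is a minimal array-based stack in C; it illustrates the LIFO discipline, not how the hardware stack is actually implemented:

```c
#include <assert.h>

#define STACK_CAP 16

typedef struct {
    int data[STACK_CAP];
    int top;               /* number of elements currently stored */
} Stack;

/* "push": put a plate on top of the pile */
static void push(Stack *s, int v) {
    assert(s->top < STACK_CAP);  /* otherwise: stack overflow */
    s->data[s->top++] = v;
}

/* "pop": take the topmost plate back */
static int pop(Stack *s) {
    assert(s->top > 0);          /* otherwise: stack underflow */
    return s->data[--s->top];
}
```

Pushing past STACK_CAP is exactly the overflow condition: the fixed capacity runs out while the code still wants to add elements.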

What is the stack for? It allows you to conveniently organize subroutine calls. When called, a function receives some arguments; it also needs to store its local variables somewhere. In addition, one function can call another function, which also needs parameters passed to it and storage for its own variables. With a stack, passing parameters is simple: you put them on the stack, and the called function can pop them from there and use them. Local variables can be kept in the same place: at the start of its code the function allocates part of the stack memory, and when control returns it cleans up and frees it. Programmers in high-level languages usually do not think about such things - the compiler generates all the necessary routine code for them.

Consequences of the error

Now we come very close to the problem. In its abstract form, a stack is an infinite storage to which new elements can be added endlessly. Unfortunately, in our world everything is finite, and the memory under the stack is no exception. What happens if it runs out while function arguments are being pushed onto the stack? Or while a function is allocating memory for its variables?

An error called a stack overflow will occur. Since the stack is needed to organize calls of user-defined functions (and almost all programs in modern languages, including object-oriented ones, are built on functions one way or another), no further functions can be called. So the operating system takes control, clears the stack, and terminates the program. Here it is worth emphasizing the difference between a buffer overflow and a stack overflow: in the first case the error consists of writing outside a proper memory area, and if that memory is not protected, the error does not manifest itself at that moment - with a lucky coincidence of circumstances the program may keep working normally; the crash happens only if the memory being accessed is protected. With a stack overflow, the program terminates without fail.

To be precise, this description of events is true only for compilers that compile to native code. In managed languages, the virtual machine has its own stack for managed programs, whose state is much easier to monitor, and one can even afford to throw an exception to the program when an overflow occurs. In C and C++, such a "luxury" cannot be counted on.

Reasons for the error

What can lead to such an unpleasant situation? Based on the mechanism described above, one option is too many nested function calls. This scenario is especially likely with recursion. Infinite recursion (in the absence of a "lazy" evaluation mechanism) is interrupted in exactly this way, unlike finite recursion, which often has useful applications. However, with a small amount of memory allocated for the stack (typical, for example, of microcontrollers), even a simple sequence of calls may be enough.
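One common defence against runaway recursion is an explicit depth counter that refuses to recurse past a limit. A sketch in C (the function and the limit value are invented for illustration; a real limit would be chosen from the known stack size and frame size):

```c
#include <assert.h>

#define MAX_DEPTH 10000  /* arbitrary guard, chosen well below the real stack limit */

/* Recursive countdown with a guard: returns 0 on success,
   -1 if the depth limit would be exceeded.  Refusing explicitly is
   recoverable; an actual stack overflow is not. */
static int countdown(int n, int depth) {
    if (depth > MAX_DEPTH)
        return -1;
    if (n == 0)
        return 0;
    return countdown(n - 1, depth + 1);
}
```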

Another option is local variables that require a lot of memory. A local array of a million elements, or a million local variables (you never know), is not the best idea. Even a single call to such a greedy function can easily overflow the stack. For large amounts of data it is better to use the heap, which allows an out-of-memory error to be handled.
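A sketch of the heap-based alternative in C (the function name is invented): unlike a huge local array, a failed heap request can be detected and handled by the caller.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* A million ints as a local array (~4 MB) would likely overflow a
   default-sized stack; on the heap the request can fail gracefully. */
static int *make_big_buffer(size_t n) {
    int *buf = malloc(n * sizeof *buf);
    if (buf == NULL) {
        fprintf(stderr, "out of memory\n");
        return NULL;   /* the caller can recover; a stack overflow cannot be caught */
    }
    return buf;
}
```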

However, heap memory is rather slow to allocate and deallocate (since the operating system handles this), and with direct access you have to allocate and free it manually. Stack memory is allocated very quickly (in fact, you only need to change the value of one register); in addition, destructors of objects allocated on the stack are called automatically when the function returns and the stack is cleaned up. Naturally, the desire arises to take memory from the stack. So the third way to overflow is the programmer's own allocation of stack memory. The C library provides the alloca function for exactly this purpose. It is interesting that while the dynamic memory allocation function malloc has a "twin" for freeing, free, the alloca function has none: the memory is freed automatically when the function returns. Perhaps this only complicates matters, since the memory cannot be freed before the function exits. Even though, according to the man page, "the alloca function is machine- and compiler-dependent; on many systems its implementation is problematic and buggy; its use is frivolous and frowned upon" - it is still used.
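A minimal illustration of alloca usage (the function here is invented; the header <alloca.h> is where glibc declares it, other systems differ, e.g. MSVC uses _alloca from <malloc.h>):

```c
#include <alloca.h>   /* glibc location of alloca; non-portable */
#include <assert.h>
#include <string.h>

/* The buffer lives in this function's stack frame and disappears
   automatically on return -- there is no "free" counterpart.
   The danger: a large or attacker-controlled `src` would overflow
   the stack with no way to detect the failure. */
static size_t stack_copy_len(const char *src) {
    size_t len = strlen(src) + 1;
    char *tmp = (char *)alloca(len);  /* freed automatically on return */
    memcpy(tmp, src, len);
    return strlen(tmp);               /* the copy is only valid inside this call */
}
```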

Examples

As an example, consider the code for recursive file search located on MSDN:

void DirSearch(String* sDir)
{
    try
    {
        // Find the subfolders in the folder that is passed in.
        String* d[] = Directory::GetDirectories(sDir);
        int numDirs = d->get_Length();
        for (int i = 0; i < numDirs; i++)
        {
            // Find all the files in the subfolder.
            String* f[] = Directory::GetFiles(d[i], textBox1->Text);
            int numFiles = f->get_Length();
            for (int j = 0; j < numFiles; j++)
            {
                listBox1->Items->Add(f[j]);
            }
            DirSearch(d[i]);
        }
    }
    catch (System::Exception* e)
    {
        MessageBox::Show(e->Message);
    }
}

This function gets the list of files in the specified directory and then calls itself for every list item that turns out to be a directory. Accordingly, with a sufficiently deep file system tree we get the natural result: a stack overflow.
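The same traversal can be made overflow-proof by keeping an explicit stack on the heap instead of recursing. A generic C sketch over a toy tree type (the Node structure is invented for illustration, not the MSDN API):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *first_child;
    struct Node *next_sibling;
} Node;

/* Depth-first traversal with an explicit heap-allocated stack: tree
   depth no longer consumes call-stack frames.  Returns the sum of all
   node values, or 0 for an empty tree / allocation failure. */
static long sum_tree(Node *root) {
    long sum = 0;
    size_t cap = 16, top = 0;
    Node **stack = malloc(cap * sizeof *stack);
    if (!stack)
        return 0;
    if (root)
        stack[top++] = root;
    while (top > 0) {
        Node *n = stack[--top];
        sum += n->value;
        for (Node *c = n->first_child; c; c = c->next_sibling) {
            if (top == cap) {            /* grow the explicit stack on demand */
                cap *= 2;
                Node **ns = realloc(stack, cap * sizeof *ns);
                if (!ns) { free(stack); return 0; }
                stack = ns;
            }
            stack[top++] = c;
        }
    }
    free(stack);
    return sum;
}
```

Here a deep tree only grows a heap array, whose exhaustion is a recoverable malloc failure rather than a fatal stack overflow.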

An example of the second approach, taken from the question "Why does a stack overflow occur?" on a site called Stack Overflow (the site is a collection of questions and answers on any programming topic, not just stack overflows, as it might seem):

#define W 1000
#define H 1000
#define MAX 100000
// ...
int main()
{
    int image[W*H];
    float dtr[W*H];

    initImg(image, dtr);
    return 0;
}

As you can see, the main function allocates stack memory for int and float arrays of a million elements each, which together amounts to a little less than 8 megabytes. Considering that Visual C++ reserves only 1 megabyte for the stack by default, the answer becomes obvious.

And here's an example taken from the Lightspark Flash player project's GitHub repository:

DefineSoundTag::DefineSoundTag(/* ... */)
{
    // ...
    unsigned int soundDataLength = h.getLength() - 7;
    unsigned char* tmp = (unsigned char*)alloca(soundDataLength);
    // ...
}

Hopefully h.getLength() - 7 will never be too large, so that the next line does not overflow the stack. But is the time saved on memory allocation worth the potential program crash?

Outcome

A stack overflow is a fatal error that most often affects programs containing recursion. However, even if the program contains no recursive functions, an overflow is still possible due to a large volume of local variables or an error in the manual allocation of stack memory. All the classic rules remain in force: given the choice, prefer iteration to recursion, and do not do by hand the work the compiler can do for you.

Bibliographic list

  • A. Tanenbaum. Computer Architecture.
  • Wikipedia. Stack overflow.
  • Stack Overflow. Stack overflow C++.

The stack, in this context, is a last-in, first-out buffer holding data during your program's execution. Last in, first out (LIFO) means the last thing you put in is always the first thing you get back out - if you push two items onto the stack, "A" and then "B", the first thing you pop off the stack will be "B", and the next thing will be "A".

When you call a function in your code, the address of the next instruction after the call is stored on the stack, along with anything else the call needs to preserve. The called function can then use more stack for its own local variables. When it is done, it frees the local variable space it was using and returns to the caller.

Stack overflow

A stack overflow is when you use more memory for the stack than your program was allotted. In embedded systems you might have only 256 bytes for the stack, and if each function's frame takes 32 bytes, you can only have calls 8 deep: function 1 calls function 2, which calls function 3 ... which calls function 8, which calls function 9 - and function 9 overwrites memory outside the stack. This can corrupt other data, code, etc.

Many programmers make this mistake by having function A call function B, which calls function C, which in turn calls function A. It may work most of the time, but just one wrong input can make it cycle forever until the stack runs out.

Recursive functions cause this as well, but if you are writing recursively (i.e., your function calls itself), you need to be aware of this and use static/global variables to prevent infinite recursion.

Typically, the OS and the programming language you are using manage the stack, and it is out of your hands. You should look at your call graph (a tree structure showing, from your entry point, what each function calls) to see how deep your function calls go, and to spot unintended cycles and recursion. Intentional cycles and recursion should be artificially checked so that they abort with an error if they call each other too many times.
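Checking call-graph depth can be automated. A toy C sketch that walks a hand-built call graph (the adjacency-list representation is invented for illustration) and reports the deepest chain, flagging cycles such as mutual recursion:

```c
#include <assert.h>

#define MAX_FUNCS 32   /* small enough to track the path in one bitmask */

/* graph[i][0..ncallees[i]-1] lists the functions called by function i.
   Returns the deepest call chain starting at `f`, or -1 if a cycle
   (direct or indirect recursion) is reachable from it.  This checker
   is itself recursive, but its depth is bounded by MAX_FUNCS. */
static int max_depth(int graph[MAX_FUNCS][MAX_FUNCS], const int ncallees[],
                     int f, unsigned visited) {
    if (visited & (1u << f))
        return -1;                 /* f is already on the current path: a cycle */
    visited |= 1u << f;
    int best = 0;
    for (int j = 0; j < ncallees[f]; j++) {
        int d = max_depth(graph, ncallees, graph[f][j], visited);
        if (d < 0)
            return -1;             /* propagate the cycle diagnosis */
        if (d > best)
            best = d;
    }
    return best + 1;               /* count f's own frame */
}
```

Multiplying the reported depth by the worst-case frame size gives a rough upper bound on stack usage, which is exactly the estimate the paragraph above asks for.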

Apart from good programming practices and static and dynamic testing, there is not much more you can do on these high-level systems.

Embedded systems

In the embedded world, especially for high-reliability code (automotive, aviation, space), you do extensive code reviews and verification, but you also do the following:

  • Disallow recursion and cycles - enforced by policy and testing
  • Keep code and stack far apart (code in flash, stack in RAM, so they can never collide)
  • Place guard strips around the stack - empty regions of memory filled with a magic number, checked (usually by an interrupt routine, though there are many options) hundreds or thousands of times per second to make sure they have not been overwritten
  • Use memory protection (i.e., no execution on the stack, no reads or writes just beyond the stack)
  • Interrupts do not call secondary functions - they set flags, copy data, and let the application handle the processing (otherwise you might be 8 deep in your function call tree, get an interrupt, and then go a few more functions deep inside the interrupt, causing an overflow). You have several call trees - one for the main process and one for each interrupt. If your interrupts can interrupt each other... well, there be dragons...
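The guard-strip technique from the list above can be shown in miniature: canary words filled with a magic number surround a buffer, and a check routine detects any overwrite (the magic value, sizes, and names here are arbitrary illustrations):

```c
#include <assert.h>
#include <stdint.h>

#define MAGIC 0xDEADBEEFu
#define GUARD_WORDS 4

typedef struct {
    uint32_t guard_lo[GUARD_WORDS];
    char     data[64];               /* the region actually meant to be used */
    uint32_t guard_hi[GUARD_WORDS];
} GuardedBuf;

/* Fill both guard strips with the magic pattern. */
static void guard_init(GuardedBuf *b) {
    for (int i = 0; i < GUARD_WORDS; i++)
        b->guard_lo[i] = b->guard_hi[i] = MAGIC;
}

/* Called periodically (in firmware: e.g. from a timer interrupt).
   Returns 1 while both guard strips still hold the magic pattern. */
static int guard_intact(const GuardedBuf *b) {
    for (int i = 0; i < GUARD_WORDS; i++)
        if (b->guard_lo[i] != MAGIC || b->guard_hi[i] != MAGIC)
            return 0;
    return 1;
}
```

In a real system the strips would bracket the stack region itself, and a failed check would trigger a fault handler or reset rather than a polite return value.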

High-level languages and systems

But in high-level languages running on operating systems:

  • Limit local variables (locals are stored on the stack), although compilers are pretty smart about this and will sometimes put large chunks on the heap if your call tree is shallow
  • Avoid or severely restrict recursion
  • Avoid breaking your program into ever smaller and smaller functions - even ignoring local variables, each function call can consume up to 64 bytes on the stack (on a 32-bit processor, saving half the processor registers, flags, etc.)
  • Keep the call tree shallow (similar to above)

Web servers

It depends on the sandbox: you may or may not be able to control or even see the stack. Chances are you can treat web servers like any other high-level language and operating system - it is largely out of your hands - but check the language and server stack you are using. For example, it is possible to blow the stack on your SQL server.