Thursday, February 25, 2010

Simulation Based Reliability Assessment (SBRA)

This is a summary of the papers:
Both were developed under the supervision of Pavel Marek (see the previous post about the conception of structural engineering).

The first one is very interesting since it allows a comparison between the current "state of the art" (i.e. the Limit State approach to design) and the proposed explicitly probabilistic approach.
It describes a step-by-step process of analysis and design comparing both methods:
  • Loading and load combination
    • LRFD/Limit States
      • Each load is expressed by a nominal value and its load factor
      • Load combinations are determined according to rules established in the Codes
    • SBRA
      • Each load is expressed by a load duration curve and its corresponding histogram
      • Load combination employs Monte Carlo sampling over some 10 million iterations of the analysis
  • Resistance and reference values
    • LRFD/Limit States
      • Resistances are combined with different resistance factors associated with failure mechanisms (yield stress, compression, flexure, shear,...) also provided by regulations
      • Design capacity = nominal capacity × the prescribed resistance factor
    • SBRA
      • The reference value corresponds to the onset of yielding on the material's stress/strain curve, or to a tolerable deformation
      • A histogram of yield stresses is used to feed each analysis iteration, together with the load histograms mentioned before
  • Frame analysis
    • LRFD/Limit States
      • A single analysis is performed
      • A direct 2nd order analysis can be used in order to check stability, but the most common approach is to do a 1st order analysis and then modify its results
    • SBRA
      • The analysis is repeated 10 million times, each time using load and resistance values drawn from the corresponding histograms. In these histograms, the more probable a load or resistance value is, the more often it will be used. Such a histogram plays the role of the probability density function in the Monte Carlo literature.
      • Initial imperfections are explicitly taken into account
      • There is no need to check individual stability of columns (2nd order)
      • Resistance is related to the onset of yielding
  • Safety assessment
    • LRFD/Limit States
      • Once the analysis is computed, for each element we check that the demand/capacity ratio is smaller than one
    • SBRA
      • The probability of failure Pf is compared against the target probability Pd provided by the codes (see the sketch right after this list)
      • Pf = P[(RV - LE) < 0], where
        • RV = yield stress
        • LE = calculated stress
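As a toy illustration (mine, not taken from the papers), here is a minimal Monte Carlo sketch of that check, Pf = P[(RV - LE) < 0], assuming numpy and made-up distributions standing in for the real histograms of yield stress and load effects:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # the papers use some 10 million iterations; fewer here to keep it light

# Hypothetical sampled variables (placeholders for the real SBRA histograms):
# RV = yield stress, LE = stress produced by dead + live load.
yield_stress = rng.normal(loc=275.0, scale=20.0, size=n)       # MPa
dead_load_stress = rng.normal(loc=120.0, scale=8.0, size=n)    # MPa
live_load_stress = rng.gamma(shape=4.0, scale=15.0, size=n)    # MPa, skewed like a load duration curve

margin = yield_stress - (dead_load_stress + live_load_stress)  # RV - LE
pf = np.mean(margin < 0.0)                                     # estimated probability of failure

pd_target = 7e-5  # target probability; the real value would come from the codes
print(f"Pf = {pf:.2e}, design acceptable: {pf < pd_target}")

In the actual method each variable would be sampled from its histogram (e.g. a load duration curve) rather than from these assumed distributions.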
The proposal is very interesting in that it provides a controlled design and an explicit probabilistic model of the structure.

Nevertheless, the main drawback is evident: the need to iterate the analysis of the whole structure some 10 million times, especially when aiming, as we are, at real-time Lagrangian dynamics.
The steps for modelling forces and displacements remain the same, that is, the traditional stiffness matrix method.
Further research shows that the main time-consuming step in the stiffness matrix procedure (also applicable to FEM) is the assembly of the stiffness matrix (60-80% of the total time).
This means that, once a configuration is assembled, solving the system for different load cases should be relatively quick, which is a relief if we want to implement SBRA.
Nevertheless, I still feel I should pay closer attention to this performance question and run my own benchmarks.
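To get a first feel for that, here is a minimal "assemble once, solve many load cases" sketch of mine (not from the papers), assuming scipy's sparse LU factorization and a random placeholder standing in for the assembled stiffness matrix:

import time
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 3000  # number of degrees of freedom (placeholder size)

# Placeholder for an assembled global stiffness matrix: sparse, symmetric, diagonally dominant.
A = sp.random(n, n, density=0.002, format="csr", random_state=0)
K = A + A.T + sp.identity(n) * 10.0

t0 = time.perf_counter()
lu = splu(K.tocsc())              # factorize once (the expensive step)
t_factor = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1000):             # stand-in for Monte Carlo load cases
    f = np.random.rand(n)         # sampled load vector
    u = lu.solve(f)               # each extra load case is only a cheap back-substitution
t_solve = time.perf_counter() - t0

print(f"factorization: {t_factor:.3f} s, 1000 solves: {t_solve:.3f} s")

The caveat is that if the sampled variables also change the stiffness itself (E, geometry, imperfections), the matrix has to be re-assembled and re-factorized, so the saving only applies to the load side.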

    Wednesday, February 24, 2010

    From Deterministic to Probabilistic Way of Thinking in Structural Engineering

    This is a very interesting article from the AECEF (Association of European Civil Engineering Faculties)!

     

    From Deterministic to Probabilistic Way of Thinking in Structural Engineering

    July 9, 2001. Adjusted Reprint: European Association of Civil Engineering Faculties - Newsletter 2001/ 1 p.


    Prof. Ing. Pavel Marek, PhD., DrSc., F.ASCE, Prague pmarek@noise.cz
    Prof. Jacques Brozzetti, Dr.h.c., Paris


    Prestige of structural engineering

    One of the speakers at the 1997 Congress of the American Society of Civil Engineers pointed out a significant drop of prestige of structural engineering. In the history of engineering education at the civil engineering departments in the USA this field used to occupy the highest echelon; therefore, the speaker was trying to find the explanation of this drop. He stated among others that the number of causes might include also the preference granted to other specializations due to general interests concerned with, for example, the development of transport networks and environment protection. In his opinion, however, the said drop might be connected also with the development of computers, structural design codes and corresponding software leading sometimes to such education of structural engineers which concentrates gradually on the application of sophisticated software, requiring from the designer merely the introduction of adequate input data, while the computer spouted almost immediately the dimensions of the structure, its reliability assessment and the whole documentation required by respective standards. In such process the designers need not even know the details of dimensioning or the substance of the reliability assessment. Is this opinion justified? Does the development of computer-aided structural design really belong among the causes of the above mentioned drop of prestige observed not only in the USA, but also in other countries?

    Is the structural engineer the creator of structures or merely an interpreter of codes?

    Together with the manufacturer and erector, the designer has been, and will always remain, the creator of the structure. His activities are based on professional knowledge, experience and cooperation with related professions. Although the computer revolution is providing ever more powerful instruments facilitating and accelerating his work, these instruments and how ever perfect codes can never replace the designer, responsible for effectiveness and reliability of his work.
    The design codes and standards cannot cover all situations the designer may encounter, i.e., loading alternatives, performance conditions, etc. Often he has to decide himself on the basis of his own knowledge and experience in accordance with the „rules of the game“ of reliability assessment. In respect to safety, serviceability and durability of structures the development of codes and standards in the past few decades has resulted in a certain damping of the creative role of the designer who has become merely an interpreter of the rules and criteria formulated in the standard. The „rules of the game“ (i.e. the theoretical foundations of reliability assessment) of the Partial Factor Design (PFD, in the USA called Load and Resistance Factor Design, LRFD) are given in the codes excessively simplified and the designer during his education is not fully acquainted with their substance. The instructors often use the wording „the code states...“, thus avoiding the explanation of the problems with which they are often not thoroughly acquainted themselves. To be accurate, they cannot be thoroughly acquainted with them, as the commentaries and data explaining fully and consistently the background of the codes, various simplifications and the influence of calibration, are not available. The consequences of this development are that the students at departments of civil engineering are often educated primarily in the interpretation of codes and not in the creative engineering way of thinking. This fact must be afforded full attention.
    Let us recall the related experience with the introduction of the PFD concept into AISC design codes in the USA. In the field of steel structures, a standard based on the LRFD method was issued in the USA in 1986. This method is actually an analogy of the PFD method – see the Eurocodes. The code was introduced in order to replace the standards based on the deterministic Allowable Stress Design method (ASD). Although the issue of the LRFD standard was preceded by an extensive explanatory campaign and training courses emphasizing the advantages of the LRFD method, today – 15 years after it has been introduced – it is applied merely by one quarter of designers while the prevailing majority of designers in the country of the tallest buildings, biggest bridges and other unique structures, has remained faithful to the excessively simplified, but understandable deterministic ASD method (Iwankiw N. AISC, Chicago. Personal communication. 2000). This seemingly conservative attitude of the designers is usually explained by unsatisfactory teaching of the LRFD method at universities. However, the number of principal causes of reserve on the part of experienced USA designers may include their feeling that the LRFD method, developed in the pre-computer era, has been submitted to the designers too late and that it no longer provides qualitatively new possibilities corresponding with the computer era in respect of reliability assessment.

    From slide-rule era to computer era in structural design

    In the courses of steel, concrete and timber structures the students of civil engineering faculties may hear from some of their instructors that due to the introduction of the Eurocodes „nothing much will happen in the field of reliability assessment in the next few decades“. This is the statement which must be contradicted. The expansion of fast improving computers to the desk of every structural designer has produced profound qualitative and quantitative changes which have no analogy in the whole history of this field. The growing computer potential improves the prerequisites for the „re-engineering“ of the whole design process (i.e. its fundamental re-assessment and re-working) to adapt it to entirely new conditions and possibilities. We can follow with admiration the fast development of software for the analysis and dimensioning of structures according to existing codes and the production of their respective drawings. At the same time it is necessary to emphasize that entirely unsatisfactory attention has been afforded so far to the development of concepts and corresponding codes based on the qualitatively improved method of reliability assessment corresponding to computer potential available.
    Since the early Sixties, many national and international codes for structural design based on deterministic concept have been replaced by a “semi-probabilistic” Partial Factors Design (PFD) such as that found in the Eurocodes. The PFD concepts have been developed using statistics, reliability theory and probability, however, without considering the computer revolution. The interpretation of the assessment format in codes is similar to the fully deterministic scheme applied in earlier codes except there are applied two partial factors instead of a single factor. The application of PFD does not require the designer to understand the rules hidden in the background of the codes. The semi-probabilistic background of the reliability assessment procedure has been considered by those writing the codes, however, the calibration and numerous simplifications introduced in the final format of codes affected the concept in such a way that the concept is better to be called “prescriptive” instead of “semi-probabilistic” (Iwankiw N. AISC, Chicago. Personal communication. 2000). The designer’s activities are limited to the interpretation of equations, criteria, instructions, factors, and „black boxes“ contained in the codes. The reliability check can be conducted using a calculator, slide rule or even long-hand, while the modern computer serves only as a “fast calculator”. The actual probability of failure and the reserves in bearing capacity cannot be explicitly evaluated using PFD codes. From the designer’s point of view, the application of PFD in practice is still deterministic. A designer‘s direct involvement in the assessment process is not assumed and, therefore, his/her creativity is suppressed.
    Has the computer potential created the prerequisites for a qualitative improvement of the partial factors method? The answer can be illustrated by the following analogy: Is it sufficient to attach a high-efficient jet engine to the gondola of a balloon in order to achieve its incomparably higher velocity and efficiency? It can be concluded that the combination of the balloon and the jet engine will not create a higher quality means of transport. Analogously it is possible to conclude that the partial factors method based on numerous limitations and simplifying assumptions cannot be raised to a qualitatively higher level of the structural reliability assessment concept by its combination with computer potential. It can be concluded that the computer revolution leads to qualitatively new fully probabilistic structural reliability assessment concepts.

    Application of elite research results to structural design codes

    The results of elite research are usually applied to specific fields (offshore structures, space programs) by top-level experts. The conferences, however, lack the papers by research scientists explaining their ideas of the application of their concepts to codes and standards used by hundreds of thousands of designers in their everyday work. Who will bring the message from the elite researchers to the rank and file designers? An understandable explanation of scientific methods of reliability assessment used in the standards accepted by structural designers worldwide is a highly exacting task. However, without its solution the results of elite research remain merely the object of articles in scientific periodicals and the designer remains merely an interpreter of prescriptive codes.
    Research affords attention to the development of risk engineering, fuzzy sets and other methods, while the designer lacks a fundamental, understandable and consistent method of determination of failure probability. Therefore, it is necessary to reassess the rules of the game of the reliability assessment, beginning with the load definition. The present day load expression in codes by the characteristic value and load factors must be replaced with a qualitatively higher form enabling to take into account also the loading history (such as, for example, the so called load duration curves, see book Simulation-based Reliability Assessment for Structural Engineers). With reference to bearing capacity it is necessary to provide a reference value applicable to the computation of the failure probability. Reliability should be expressed by a comparison of the computed failure probability with design target probability.
    The preparedness of designer is the necessary prerequisite for the practical application of the probabilistic concept of reliability assessment. Let us turn our attention to the education of students at civil engineering departments and designers in post-graduate courses.

    Deterministic or probabilistic approach in education?

    Let us ask these questions: Is the approach applied in our courses to the solution of technical problems in structural design, deterministic or probabilistic? Are instructors infusing a deterministic understanding into the “knowledge-base” of their students, or is the fact that we are living in a world defined by random variables already accepted and applied in the educational process? In courses such as Statistics and Probability Models in Structural Engineering, the common textbooks are based on a “classical” approach to statistics and probability theory. Such an approach is limited to analytical and numerical solutions, and does not allow for transparent analysis of reliability functions that depend on the interaction of several random variables. The textbooks mostly remain silent on common real-world problems, such as the probability of failure of a structural component exposed to variable load combinations in which one might consider the contributions of variable yield stress, variable geometrical properties and random imperfections. In structural design courses, the interpretation and application of the existing codes are emphasized; however, students are using the codes without a full understanding of the actual reliability assessment rules and of the meanings of the factors used to express safety, durability, and serviceability of structural components.

    Teaching reliability

    Advances in computer technology allow for using simulation-based approach to solve numerous problems. The Monte Carlo simulation technique has been applied to basic problems in statistics and probability for a long time. The structural reliability assessment using direct Monte Carlo has been taught, for example, since 1989 at San Jose State University, California, and since 1996 at the Department of Civil Engineering, VŠB TU Ostrava, Czech Republic, at the graduate and undergraduate levels. The positive response of the students, and their understanding encouraged the instructors. A team of undergraduate students developed, for example, a study proving that the LRFD method is not leading to a balanced safety (see Probabilistic Engineering Mechanics 14 (105 – 118), USA, 1998). The new generation of civil engineers seems to be anxious to apply advanced computer technology to its fullest including application of simulation techniques in the analysis of multi-variable problems.

    TeReCo project

    With reference to the improvements expected in the field of structural reliability assessment the training of students and designers ranks among the most important tasks. What starting point should be chosen? A transition to the qualitatively higher probabilistic concepts will require the designer to change his way of thinking, i.e. to replace his current “deterministic thought-process” with the probabilistic one. The professional EC Committees consider the training of designers in this respect highly desirable. For this reason the Leonardo da Vinci Agency in Brussels has sponsored the TeReCo Project (Teaching Reliability Concepts using simulation techniques). This long term project had resulted in the work of 33 authors from nine countries being published in the textbook Probabilistic Assessment of Structures Using Monte Carlo Simulation. Background, Exercises and Software. (it is available since June 2001). The book acquaints the readers with the basis of a fully probabilistic structural reliability assessment concept, using as a tool the transparent SBRA method (Simulation-Based Reliability Assessment method documented in textbook Simulation-based Reliability Assessment for Structural Engineers and applied in about hundred papers). The concept allows for bypassing the „design-point” approach as well as the load and resistance factors, and leads to the reliability check expressed by Pf < Pd comparing the calculated probability of failure Pf with the target design probability Pd given in codes. The application of SBRA is explained in the book using 150 solved examples. On the attached CD-ROM, the reader will find the input files and computational tools enabling the duplication of the examples on a PC, the pilot data-base of mechanical properties (expressed by histograms) of selected structural steel grades, selected histograms (loads, imperfections, and more), manuals for computer programs and 55 selected presentations of examples (Microsoft PowerPoint). The book should serve as an aid in teaching undergraduate and graduate students and in introducing the designers to the strategy of the fully probabilistic reliability assessment of elements, components, members and simple systems using direct Monte Carlo simulation and modern PC computers.

    Summary and conclusions

    The structural engineering profession needs new approaches if we want to provide the best possible service to society. We have to consider the transition from the deterministic „way of thinking” to open-minded probabilistic concepts accepting the random character of individual variables involved as well as their interaction. Tools such as simulation techniques and powerful personal computers will contribute to reaching such goals. Students find these techniques easy to learn and thus they do not require the instructor to take a great deal of classroom time to explain the theoretical background. Once in the computer lab, students can explore to their hearts content and gain a fuller understanding of the effects of each parameter on the variability of the final answer. With this understanding students are better informed to make decisions about tradeoffs that need to be made, for example, between service life, durability and safety. The simulation technique should be included in the program of undergraduate and graduate studies and in corresponding textbooks to prepare students for the types of problems they will encounter in the real world, especially for the application of probabilistic structural reliability assessment concepts in the new generation of codes expected to be introduced in the near future. Such approach will make the engineer the creator of the structure and may bring the prestige of structural engineering back to one of the highest positions.

    Tuesday, February 23, 2010

    Limit State Design

    Limit State Design is the method now used in every regulation (Eurocode, the Spanish CTE, the US LRFD, ...), so I think it is a primary target on the steep road towards stochastic simulation.

    In fact, a lot of the bibliography points from Limit State Design to FORM (First Order Reliability Method).
    Stochastic methods are being widely used in these FORM-based environments.

    But first, let's take a brief look at Limit States:

    Limit States Design requires the designer to establish a set of performance criteria (vibration levels, deflection, strength, stability, buckling, twisting, collapse, ...) for the designed piece.
    Then, according to its relevance, each design variable falls within the area of safety (ULS) or of comfort (SLS):
    • ULS (Ultimate Limit State): Satisfaction of ULS happens when all factored bending, shear, tensile and compressive stresses are below the factored resistances.
      • Stresses are magnified
      • Resistances are reduced
    • SLS (Serviceability Limit State): SLS is satisfied when deflection, vibration, cracking, ... stay within the specified criteria
    Hence the current method is called the Partial Factors method in European regulations, and LRFD (Load and Resistance Factor Design) in the USA.
    Here the limits and the factors are prescribed by the regulatory body on a probabilistic basis, but they are applied in the same deterministic way, and within the same deterministic methodology, as has been done for decades.
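    As a minimal numerical illustration (my own numbers, not taken from any specific code), this is how a single ULS check reduces to comparing a factored demand with a factored capacity; the loads, resistance and factors below are hypothetical, roughly in the Eurocode style:

# Hypothetical ULS bending check for a beam section (illustrative values only).
G_k = 40.0   # characteristic permanent load effect, kN*m
Q_k = 25.0   # characteristic variable load effect, kN*m
gamma_G, gamma_Q = 1.35, 1.50   # partial factors on loads (magnify the demand)

M_Rk = 110.0     # characteristic bending resistance, kN*m
gamma_M = 1.05   # partial factor on resistance (reduces the capacity)

M_Ed = gamma_G * G_k + gamma_Q * Q_k   # factored demand
M_Rd = M_Rk / gamma_M                  # factored capacity

ratio = M_Ed / M_Rd
print(f"demand/capacity = {ratio:.2f} -> {'OK' if ratio <= 1.0 else 'NOT OK'}")

    All the probabilistic content is hidden inside the prescribed factors; the check itself remains a single deterministic comparison.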

      Saturday, February 20, 2010

      Collision Detection Summary

      Well, well... after a whole week of intense research, I think I have enough insight into Collision Detection to try to condense the matter into a post.
      The following is a brief report based on the references stated at the end:

      INTRODUCTION

      Collision detection is one of the major bottlenecks in any physics simulation, given its computationally intensive nature.
      It is also one of the main sources of error, since it serves as the basis for every other step in the simulation. Calculated collision points may later be treated as constraints within our solvers, or as force accumulators, and they can introduce a lot of instabilities if they are not precise enough.

      In general, collision detection / proximity query / penetration depth algorithms are targeted at providing exactly that information, to be used later for other purposes. These purposes are commonly physics simulations, haptic devices, or robotic simulations, and they require both real-time speed of computation and robustness.

      A ROUGH REVIEW ON COLLISION DETECTION METHODOLOGIES FOR POLYGONAL MODELS

      According to the literature, it is possible to make an initial distinction depending on the nature of the graphic input data: polygonal models vs non-polygonal ones.



      This separates the most common "soups of vertices" and triangles from solids, implicit surfaces and parametric surfaces.
      Basically, all the collision detection methodologies I researched focus on the former, as it is fairly easy to pre-process the others into polygon soups.
      Furthermore, algorithms for non-polygonal surfaces are very specific and would therefore limit the input: the end user would be forced to provide information in this "format".

      Once this distinction is made, each of the different algorithms focuses on one or several of the following issues:
      • Pair processing vs N-body processing: if a method puts too much emphasis on the precision of the distance measurement between two particular meshes, it becomes too slow, and so there is a limit to the number of bodies the simulation can represent.
      • Static vs Dynamic: methods that use the known trajectories of the bodies require a certain amount of feedback between the dynamics engine and the collision detection package. This is where the concept of frame coherence appears: these methods rely on the consistency between consecutive time steps to make their calculations.
      • Rigid vs Deformable: when the meshes represent cloth or fluids (deformable meshes), robustness and efficiency frequently suffer, since the dynamic behaviour of their shape is often complex and essentially non-linear.
      • Large vs Small environments: there is also the problem of generalizing the algorithms so that they work equally well at short and long range, avoiding jumps in the numerical results that lead to instabilities and affect robustness.

      BOUNDING VOLUME HIERARCHIES (BVHs)

      A ubiquitous subject encountered in every method I researched is that of Bounding Volume Hierarchies.
      They arise from an iterative simplification of the problem of the intersection of N bodies.
      The idea is to associate a simpler geometric shape with our initial complex shape, so that distance calculations can be made faster.
      As pairs of shapes get closer and closer, we iteratively subdivide those rougher representations of them.
      Many types have been devised:
      • k-DOP trees (Discrete Orientation Polytopes)
      • Octrees
      • R-trees
      • Cone trees
      • BSPs (Binary Space Partitions)


      These trees are composed of simpler geometric shapes, hierarchically sorted, which represent our objects within the tree; a minimal build sketch follows the list below.
      There are various types:
      • Spheres
      • AABB (Axis Aligned Bounding Boxes)
      • OBBs (Oriented Bounding Boxes)
      • k-DOPs (Discrete Orientation Polytopes, where k means the number of faces)
      • SSV (Swept Sphere Volumes) 
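      As promised, here is a hedged sketch of mine (not from any particular library) of how such a hierarchy can be built: a recursive AABB tree over object centroids, splitting each node along its longest axis.

import numpy as np

class BVHNode:
    def __init__(self, box_min, box_max, indices=None, left=None, right=None):
        self.box_min, self.box_max = box_min, box_max   # corners of this node's AABB
        self.indices = indices                          # object ids stored at leaves only
        self.left, self.right = left, right

def build_bvh(centroids, indices, leaf_size=4):
    # Bounding box of all objects assigned to this node.
    box_min = centroids[indices].min(axis=0)
    box_max = centroids[indices].max(axis=0)
    if len(indices) <= leaf_size:
        return BVHNode(box_min, box_max, indices=indices)
    axis = int(np.argmax(box_max - box_min))            # split along the longest axis
    order = indices[np.argsort(centroids[indices, axis])]
    mid = len(order) // 2
    return BVHNode(box_min, box_max,
                   left=build_bvh(centroids, order[:mid], leaf_size),
                   right=build_bvh(centroids, order[mid:], leaf_size))

# Usage with made-up object centroids.
n = 200
centroids = np.random.rand(n, 3)
root = build_bvh(centroids, np.arange(n))
print("root box:", root.box_min, root.box_max)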
          The main procedure consists of gathering statistical data from all the vertices of each mesh in order to obtain the centroid (the mean of all of them) and the maximum and minimum values that define the bounding volume (whatever the chosen shape). Then the specific characteristics of that shape can be used for the interference calculations.
          With spheres it is quite straightforward, using their radius and centre. With bounding boxes it is possible to "project" the mesh vertex coordinates onto each axis. With these projections, a list of intervals is built for each dimension, and interference is checked by sorting and sweeping these lists, as sketched below.
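          A minimal sketch of that idea (mine, using numpy): compute the centroid and an axis-aligned bounding box from a vertex soup, then check the overlap of two boxes by comparing their intervals on every axis.

import numpy as np

def bounding_box(vertices):
    """Centroid plus the min/max corners of an axis-aligned bounding box (AABB)."""
    v = np.asarray(vertices, dtype=float)   # shape (n_vertices, 3)
    centroid = v.mean(axis=0)               # mean of all vertices
    return centroid, v.min(axis=0), v.max(axis=0)

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Two AABBs intersect only if their intervals overlap on every axis."""
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

# Tiny usage example with two made-up "polygon soups".
mesh_a = np.random.rand(100, 3)          # vertices in the unit cube
mesh_b = np.random.rand(100, 3) + 0.8    # shifted copy, partially overlapping

c_a, min_a, max_a = bounding_box(mesh_a)
c_b, min_b, max_b = bounding_box(mesh_b)
print("broadphase hit:", aabb_overlap(min_a, max_a, min_b, max_b))

          A sweep-and-prune broadphase keeps these per-axis intervals in sorted lists, so that with many objects only the pairs whose intervals overlap on all axes are passed on to the narrowphase.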

          Actually these trees, or hierarchies, are just an abstraction to obtain different levels of detail during computation.
          When checks are made between objects that are far apart, we speak of the broadphase. In this stage the resulting nested bounding volumes are simpler and rougher and have little to do with the actual shape they contain.

          When objects get closer, the narrowphase comes into play. It is triggered whenever an intersection between two bounding volumes has been detected in the broadphase, and it implies the iterative subdivision of the bounding volume, either until a collision point is found, a penetration distance is calculated, or a separation distance is obtained.
          In this part many different approaches can be found, with widely varying performance results and assumptions.
          Nevertheless, one in particular stands out above the others: the GJK algorithm.

          GJK ALGORITHM AND THE MINKOWSKI SUM

          A principal character in this play is the GJK (Gilbert-Johnson-Keerthi) algorithm.
          It exploits an algebraic property of every pair of convex polyhedra: the Minkowski sum.

          Basically this sum (strictly speaking a difference, though there is not much consensus on naming and I am not a mathematician either) returns a very important result: when two convex polyhedra are combined through it, if they share a part of space (they intersect), the origin is contained within the resulting polyhedron.


          For every pair of objects whose interference needs to be checked (remember, this is triggered only between objects that are close), all that has to be done is to iteratively check whether the features (vertices, edges and faces) of this combined shape get closer to or farther from the origin.
          This is done using the concept of a simplex, which can be a vertex, an edge, a triangle or a tetrahedron (all of them known as polytopes).
          With a single loop that iterates through the geometry, it is possible to discard interference as soon as a direction is found along which the combined shape lies entirely on one side of the origin. If no such direction appears, then the objects are overlapping.
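          To make this concrete, here is a hedged sketch (mine, not from any particular library) of the core GJK building block: the support function of the Minkowski difference, which returns the farthest point of A - B in a given direction. The full algorithm iterates these support points into a simplex and asks whether that simplex can enclose the origin; only the building block and a one-direction separation test are shown here.

import numpy as np

def support(vertices_a, vertices_b, direction):
    # Farthest point of the Minkowski difference A - B along `direction`.
    # GJK never builds A - B explicitly: it only queries it through this
    # function, taking the vertex of A farthest along `direction` and the
    # vertex of B farthest along the opposite direction.
    a = vertices_a[np.argmax(vertices_a @ direction)]
    b = vertices_b[np.argmax(vertices_b @ -direction)]
    return a - b

# Partial check along a single direction d: if the support point has a
# negative projection on d, a separating plane exists and the shapes
# cannot overlap (the converse needs the full simplex iteration).
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # a triangle
B = np.array([[2.0, 0.0], [3.0, 0.0], [2.5, 1.0]])   # a triangle to its right
d = np.array([1.0, 0.0])
print("separated along d:", support(A, B, d) @ d < 0.0)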

          This algorithm is remarkably efficient, as its popularity shows, and has been proven to reduce the complexity from O(n²) to O(n log n).
          Mainly this comes from the fact that it gives a true/false answer very quickly, as it does not need to iterate over the whole geometry to obtain it (the answer usually arrives after a few iterations of the simplex-checking loop).

          The first problem one encounters is that distances are not straightforward in this method. Obtaining the penetration distance requires further processing and affects performance negatively when implemented. An error tolerance has to be provided, and then successive iterations are made in a manner very similar to the "sweep and prune" approach described above: testing that a polytope "shrinking" towards the origin is as small as this tolerance.

          Another drawback of this method is its limitation to convex polyhedra; the Minkowski sum property does not apply to concave polyhedra.
          This can be tackled with a preprocessing step of convex shape decomposition, which provides a set of Bounding Volume Hierarchies nested within the initial concave shape.

          Last but not least is the instability problem that arises when very small shapes are tested against much larger ones. Due to the difference in size, the Minkowski sum presents extremely oblong facets, which commonly leads to infinite loops.

          OPEN SOURCE COLLISION DETECTION
          The following is a list of all the available open source packages I have tested, which helped me figure out what is described above:
          • I-COLLIDE collision detection system: an interactive and exact collision detection library for environments composed of many convex polyhedra or unions of convex pieces, based on an expected-constant-time incremental distance computation algorithm and on algorithms to check for collisions between multiple moving objects.
          • RAPID interference detection system: a robust and accurate polygon interference detection library for pairs of unstructured polygonal models. It is applicable to polygon soup models which contain no adjacency information and obey no topological constraints. It is most suitable for close-proximity configurations between highly tessellated smooth surfaces.
          • V-COLLIDE collision detection system: a collision detection library for large dynamic environments, which unites the n-body processing algorithm of I-COLLIDE with the pair processing algorithm of RAPID. It is designed to operate on large numbers of static or moving polygonal objects and allows dynamic addition or deletion of objects between timesteps.
          • SOLID interference detection system: a library for interference detection of multiple three-dimensional polygonal objects, including polygon soups, undergoing rigid motion. Its performance and applicability are comparable to those of V-COLLIDE.
          • SWIFT++: a collision detection package capable of detecting intersection, performing tolerance verification, computing approximate and exact distances, and determining the contacts between pairs of objects in a scene composed of general rigid polyhedral models. It is a major extension of the SWIFT system previously released by UNC and uses convex polyhedron decomposition.
          • OPCODE: similar to popular packages such as SOLID or RAPID, but more memory-friendly and often faster. It stands for OPtimized COllision Detection.


          • PQP proximity query package: it pre-computes a hierarchical representation of the models using tight-fitting oriented bounding box trees (OBBTrees). At runtime, the algorithm traverses two such trees and tests for overlaps between oriented bounding boxes based on a separating axis theorem, which takes fewer than 200 operations in practice.

          Friday, February 19, 2010

          Collision Detection Sources

          Well, for the last two days I've been researching the topic of Collision Detection (as planned).
          After a short fight against OPCODE (http://www.codercorner.com/Opcode.htm), which stands for OPtimized COllision DEtection, I've been forced to abandon it over mere technicalities with its C++ implementation.
          I just couldn't make it compile under gcc and EasyEclipse... anyway, thanks to Pierre Terdiman (http://www.codercorner.com/blog/).

          I have switched to PQP (http://gamma.cs.unc.edu/SSV/), which claims to be as fast and memory efficient as OPCODE, and I have already managed to make it work... now let's strip it down!

          Thursday, February 18, 2010

          Working Programme

          Well, today is the beginning of the new approach towards the thesis!

          According to my calendar, I still have 7.5 months here in Cercedilla (60 km north of Madrid), in my parents' house, with nothing to worry about but "ora et labora"...

          So, it seems reasonable to try to develop a full physics environment of my own, where at least I can control and understand everything that happens in it... maybe later on I can go back to ODE or Bullet!

          According to documentation on Bullet and ODE, common physics engines implement the following:
          • Collision detection
          • Rigid body dynamics
          • Constraints
          • Soft body dynamics
          The thing is that I also want to implement Finite Element Analysis and a stochastic environment... so my research timeline could be as follows:
          • February: Collision detection+Stochastic environment
          • March: Rigid body dynamics+Stochastic environment
          • April: Rigid body dynamics+Finite Element Analysis
          • May: Rigid body dynamics+Constraints+Finite Element Analysis
          • June: Rigid body dynamics+Soft body dynamics
          • July: Finite Element Analysis+Soft body dynamics
          • August: Finite Element Analysis
          • September: Stochastic environment
          Of course, it is important to keep in mind that not everything has to be perfectly implemented. A rough, working sample of each item should be enough, as long as the whole system allows for further improvements.
          I count heavily on the already developed code (Bullet and ODE are open source) and on the extensive documentation available (which sometimes is more a burden than a relief).

          There are also two important deadlines:
          • June for the presentation of the thesis proposal to the UPC jury
          • October for the mobility grant to Ljubljana
          Well...time will tell!

          Wednesday, February 17, 2010

          Bullet Physics Engine

          After a few months banging my head against Bullet (http://bulletphysics.org), which apparently could make my life a lot easier:
          Bullet is a Collision Detection and Rigid Body Dynamics Library. The Library is Open Source and free for commercial use, under the ZLib license. This means you can use it in commercial games, even on next-generation consoles like Sony Playstation 3.
          I have chosen to abandon this strategy for the time being given the following conclusions:
          • Physics engines in general try to be as general as possible, which makes them extremely complex when it comes to learning how they work.
          • Physics engines are normally game-oriented, and although I haven't been able to prove it yet, their lack of precision might become an important issue when it comes to solving engineering problems.
          • There is no way around deeply understanding the numerical methods they employ.
          Nonetheless, this is the amount of research done so far:

          With this in mind, I think the next step will be to focus on achieving the knowledge to tackle my own physics engine!

            Sunday, February 14, 2010

            Thesis Proposal


            Introduction

            State-of-the-art

            Non-deterministic methods have been successfully applied in science since the 1950s to tackle uncertainty and to evaluate complexity.
            Lately, these very methods, combined with modern computational tools (Finite Element Method, Applied Element Method, Discrete Element Method, ...), are proving very helpful in the automotive, aerospace and naval industries to achieve sophisticated, reliable and precise designs.
            On the other hand, new simulation tools have appeared in the last decade that were not possible before the increase in computer power and in clustering and parallelizing capabilities.
            The videogame, film and virtual reality industries are using them, and simulations flood the media with a quality that often makes it very difficult to separate fiction from truth.
            In the building environment, perhaps given some inherent differences in the requirements of the product provided to society, we are still pretty far from those technological advances.
            This could be seen as a disadvantageous position but it opens, in fact, a whole world of new opportunities for research.
            The thesis proposed here aims to provide a framework where these two not-so-different disciplines (non-deterministic design methods and computer physics simulation) combine in order to open another path towards better design tools.

            The deterministic approach to structural design

            Modern building design and analysis is almost exclusively based on the stiffness matrix method or, in the best cases, on Finite Element software.
            This software has reached a considerable level of sophistication and versatility today. However, one increasingly important aspect of analysis that these programs are unable to address is that of uncertainty in structural parameters and in loading and boundary conditions.
            It is a well-known fact that a deterministic single-point evaluation of the response may, under many circumstances, produce an over-designed and excessively conservative system if the presence of parameter scatter is not taken into account.
            International building codes have historically taken this procedure for granted, and nowadays Limit States is the compulsory method for evaluating any building (Eurocodes, ASCE, ACI, ...).
            These Limit States are given to the designers on a probability basis but necessarily have to be included into our deterministic analysis in the form of reduced material strengths and magnified forces.
            A stochastic approach allows us to define the reliability of our design in a different way: by calculating the probability of failure (or of reaching a limit state) in a straightforward manner, and then comparing it against the standards.

            Ordinary Differential Equations and physics simulation

            The second weakness of this deterministic approach to the design of building structures is that tackling non-linearity (buckling of columns, terrain, earthquakes, ...) has become an extremely imprecise field, full of approximate methods.
            Intensive particle-based Lagrangian methods, on the other hand, are a relatively recent field of research (even though they use classical Newtonian physics), where the phenomena stated above simply arise as a consequence of the simulated collective behaviour.
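            As a first concrete step in that direction, here is a minimal sketch of mine (not part of any engine) of the kind of particle-based ODE integration involved: a single mass on a spring advanced with semi-implicit (symplectic) Euler, the integrator that real-time engines commonly favour for its stability; all the numbers are arbitrary.

# Semi-implicit (symplectic) Euler for one particle on a linear spring with damping.
mass = 2.0        # kg
k = 50.0          # spring stiffness, N/m
c = 0.5           # viscous damping, N*s/m
dt = 1.0 / 60.0   # a typical real-time step (60 Hz)

x, v = 0.1, 0.0   # initial displacement (m) and velocity (m/s)

for step in range(600):                 # simulate 10 seconds
    force = -k * x - c * v              # internal force on the particle
    v += (force / mass) * dt            # update the velocity first...
    x += v * dt                         # ...then the position with the new velocity
    if step % 120 == 0:
        print(f"t = {step * dt:4.1f} s  x = {x:+.4f} m")

            Extending this scheme to many interconnected particles, with collision and structural forces accumulated per step, is essentially what the physics engine side of the thesis would have to do.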

            Thesis targets

            • Achieve a computer tool with the following features:
              • Real-time ODE-based physics.
              • Behaviour-monitored structural elements and parameters.
              • Real-time design visualization and designer interaction.
              • Stochastic methods applied to different structural systems and probability-based evaluation of their reliability.
              • Building forensics of existing or failed buildings.
            • Evaluate the adequacy of Lagrangian particle-based methods for engineering purposes.